Why Hands-On Research is Central to a ML Engineer’s Day at Striveworks | Built In Austin


While it may sound oxymoronic, working in machine learning is a hands-on job.

As AI tools have become more commonplace in daily life, users have begun offloading their more tedious, hands-on tasks to them. According to data compiled by Forbes Advisor, some of the most common uses of AI today include responding to text and email messages, answering financial questions and planning travel itineraries; half of U.S. mobile users rely on AI-powered voice search daily.

For Nicholas Lind, a senior machine learning engineer at Striveworks, building new computer vision models requires regular research. When a new obstacle arises, Lind dives into the latest scholarly articles in search of an appropriate solution or new technique. Alongside this collaborative work style, the team at Striveworks holds weekly knowledge-sharing sessions to stay actively informed about developments in the ML space.

Built In Austin sat down with Lind to understand what his day-to-day work looks like as an ML engineer and why hands-on research plays a critical part in his success.


Striveworks offers a cloud-native platform that allows users to build and deploy AI models.

Tell us about a typical day with Striveworks. What sorts of problems are you working on? What tools or methodologies do you employ to do your job?

As an engineer on the Striveworks research and development team, I’m focused on building systems to train, deploy and evaluate computer vision models in resource-constrained environments. Imagine, for example, that we’re trying to spot forest fires using photographs and thermal images from a fleet of small satellites. Our team would work on how we can quickly deploy a plume-identification model into production, evaluate its success over time and tweak our preprocessing and modeling steps to deliver the best possible information to firefighting teams on the ground.

My day-to-day at Striveworks includes tackling various challenges involved with training and evaluating these types of computer vision models. I’ll typically spend one to two hours a day meeting with colleagues or partner teams to identify requirements and present findings, and I’ll spend the rest of my time writing software in Python or Go. Each team has its own preferred project management methodology, but we all share many of the same development tools, such as Kubernetes.

Tell us about a project you’ve worked on that you’re particularly proud of. 

One project I’m proud of is Valor, an open-source evaluation store we developed to measure and rank computer vision models. Given a set of human-annotated ground truths and predictions, Valor computes evaluation metrics and compares results across discrete and geospatial metadata. These comparisons allow our users to easily answer questions like, “Which of my productionized models are underperforming at night?” and “How should I adapt my pipeline to optimize performance in a specific region or climate?”
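The core idea here is slicing evaluation metrics by metadata so that per-condition performance becomes visible. As a rough sketch of that idea (the function and record layout below are hypothetical illustrations, not Valor’s actual API):

```python
from collections import defaultdict

def accuracy_by_metadata(records, key):
    """Group (ground_truth, prediction, metadata) records by a metadata
    field and compute per-group accuracy."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for ground_truth, prediction, metadata in records:
        group = metadata[key]
        totals[group] += 1
        if prediction == ground_truth:
            hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}

# Toy records: (ground-truth label, predicted label, capture metadata)
records = [
    ("smoke", "smoke", {"time_of_day": "day"}),
    ("smoke", "cloud", {"time_of_day": "night"}),
    ("cloud", "cloud", {"time_of_day": "day"}),
    ("smoke", "smoke", {"time_of_day": "night"}),
]

print(accuracy_by_metadata(records, "time_of_day"))
# {'day': 1.0, 'night': 0.5}
```

A real evaluation store computes richer metrics (such as mAP over bounding boxes) and supports geospatial filters, but the grouping pattern is the same: attach metadata to every ground truth and prediction, then aggregate per slice.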

The development process itself was iterative. We started with a rough idea of which metrics and interfaces our data scientist clients would like to work with — loosely based on the iconic COCO dataset — and gradually refined our work over time as we solved new challenges with clients and partner teams. Valor has since been integrated into Striveworks’ core products, and we’re in the process of spinning up a new data analysis offering that uses Valor at its core.

How do you stay updated with the latest advancements in machine learning, and how do you apply them to your work?

Most of my updates come from hands-on investigations. Whenever I’m working on a new problem or research space, I’ll spend time on arXiv, familiarizing myself with the latest research, tooling and techniques.

“Most of my updates come from hands-on investigations.”

Once I have a mental model for the problem and solution space, I’ll reach out to fellow Strivers who have worked on similar problems in the past — there is always at least one — to gather their perspectives. Finally, I save a bit of time each week to join Striveworks’ weekly internal knowledge-sharing session, a meeting series we call “Journal Club,” and watch for major publications on Hacker News.