
007 Aritra Roy Gosthipaty - Deep Learning Associate at PyImageSearch
In today's interview, I am joined by Aritra, who is a Deep Learning Associate at PyImageSearch. Aritra is also a Hugging Face Fellow and a Google Developer Expert in Machine Learning. He is an avid contributor to open-source packages such as 🤗 transformers and keras-io. This interview explores his deep learning journey and his love for open source.
Thanks, Aritra, for taking the time to join us today. Let’s start with your deep learning journey. How did you get into deep learning?
Thank you, Derrick, for having me here. I must say, this is my first time being interviewed, and it is an amusing feeling. I still think of myself as an absolute nobody, but I want to share my journey with people who might be motivated enough to make it big (of course, bigger than I could ever imagine).
Many variables led me to pursue Deep Learning (DL). I already had a knack for programming and mathematics, but I did not have any formal education in Machine Learning. When I had to submit my final-year project at undergraduate school, Ayush Thakur, one of my group mates, proposed that I work at the intersection of DL and medicine.
It took a lot of time to grasp the DL concepts. Meanwhile, Ayush was patient enough with me and my questions. Sayak Paul also mentored us in the group, and there was no looking back after that. I loved what I was doing!
For anyone curious, this was the paper that we finally published.
Did you have any prior writing experience before working for Weights and Biases as a Technical author?
I wish I could say yes to this question. Unfortunately, I had no prior writing experience, let alone technical writing. After years of writing, I still sometimes cannot convey my intuitions properly.
The "authors program" at Weights and Biases has taught me a lot. I remember being motivated to write for them because they had no constraints with topics. So I wrote several breakdown (as I like to call them) blog posts for them. You can find all the blog posts here. I wrote about a research paper into chunks of intuitive concepts in one of my “breakdown blog posts.”
How did you transition from a technical author to a machine learning engineer?
I was writing for Weights and Biases for a fair amount of time. They knew that not only did I understand the concepts of DL, but I also knew how to work with their product. The team was kind enough to offer me a job as a repository issue solver.
My main work was to track the issues of the client (wandb/client) repository and either solve them or route them to the engineering team. So I was not an ML Engineer. I was more on the customer support end.
For people still reading, I will share a useful tip now. I had no prior experience working with a sizable open-source repository, and it was overwhelming at first. However, it becomes more comfortable by the day once you spend enough time looking around the repository.
Looking at issues and trying to solve them was how I got a deeper understanding of the entire wandb/client repository. So, if you like an open-source repository and want to contribute, go to the issues and try to solve some of the bugs.
At PyImageSearch, you read, implement, and write about concepts from research papers. Walk us through the steps you take from the idea inception to when you have an article ready for publication.
For people who do not know about PyImageSearch, it is an educational company focusing solely on Deep Learning and Machine Learning materials, with a bias toward Computer Vision. We have 3 million returning views per year from people who want to learn about the latest and greatest in the DL space.
I mainly work with my friend and colleague, Ritwik Raha. We are very active on Twitter. Being on the bird app, surrounded by like-minded people in the DL and ML space, we get a sense of where things are headed. So our first instinct is to decide whether the hype around a topic will pass the test of time.
Upon deciding on a particular topic, we take one week for research. During this time, we figure out the best articles on the internet, the people working on the topic, the timeline of the academic papers, and much more.
After the week, we (Ritwik and I) share notes and argue about how the article should be structured. After coming to a consensus, we start with rough drafts of the article. A friendly reminder here (for people who want to write good pieces): do not try to make your articles perfect the first time; iterate over them many times and converge (like a neural network).
PyImageSearch is more hands-on, so most of our tutorials are supported with code, either reproduced results or a minimal implementation. This process takes the most time. We have to decide on the datasets, build the models, review the code, make it more efficient, and use the latest versions of the packages we depend on.
After the code, we write the article, and Ritwik builds the animations (if there are any) and the illustrations. Once the article is written, we record a video for our paid customers where we go in-depth into the topic and the code.
That is how a PyImageSearch tutorial is built. It is a comprehensive process, and that is why people love us.
What’s the most exciting project you have worked on at PyImageSearch?
It has to be working with NeRFs. I stumbled on the NeRF paper by Ben Mildenhall et al. and was hooked immediately. However, I first needed to gain a working understanding of computer graphics, and I knew it would be easier if I took on this project with a friend (enter Ritwik).
We took it upon ourselves to go through the paper and understand it piece by piece. The paper was so well written that the concepts were evident to us in no time. The only problem we faced was our non-existent background in computer graphics and ray tracing.
We had to take a detour and cover the prerequisites first. Then, once we understood them, we implemented NeRF, which was an instant hit.
PyImageSearch has a three-part series on NeRF. It is constructed so that people like us, who know a little about DL but not much about computer graphics, can easily understand NeRFs.
Which projects do you consider to have been most challenging to work on at PyImageSearch and Weights and Biases?
PyImageSearch:
The entire transformer series on PyImageSearch is something that I have enjoyed writing. Of course, it had hiccups, but it is one of my favorite pieces. The challenging part was that we were five years late in covering the topic. There are numerous articles about it, so we had to stand out. I will stop here and ask people to read the series and judge whether it stands out.
Weights and Biases:
Building the documentation pipeline for the Weights and Biases team was quite a challenge. The task was to extract docstrings from the Python modules, parse them, build Markdown files, and publish them to the website. The present wandb documentation website (the API reference) still uses my Python pipeline. This is also where I got to work under Charles, who guided me through many rough patches.
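To give a flavor of what such a pipeline can look like, here is a minimal sketch (a hypothetical `module_to_markdown` helper, not the actual wandb pipeline) that pulls docstrings out of a module with Python's `inspect` and writes a small Markdown reference page:

```python
import importlib
import inspect
from pathlib import Path


def module_to_markdown(module_name: str, out_dir: str = "docs") -> Path:
    """Render a tiny Markdown API reference for one module.

    Hypothetical sketch: the real wandb pipeline also handles parsing
    conventions, cross-linking, and publishing, all omitted here.
    """
    module = importlib.import_module(module_name)
    lines = [f"# `{module_name}` API reference", ""]

    # Walk the public functions and classes defined in this module.
    for name, obj in inspect.getmembers(module):
        if name.startswith("_"):
            continue
        if not (inspect.isfunction(obj) or inspect.isclass(obj)):
            continue
        if getattr(obj, "__module__", None) != module_name:
            continue

        doc = inspect.getdoc(obj) or "No docstring available."
        lines += [f"## `{name}`", "", doc, ""]

    out_path = Path(out_dir) / f"{module_name}.md"
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text("\n".join(lines))
    return out_path


if __name__ == "__main__":
    # Example: render a reference page for the standard-library `json` module.
    print(module_to_markdown("json"))
```

The generated Markdown files can then be committed to the documentation repository or fed into whatever static-site generator the docs website uses.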
You have done a lot of work in computer vision. This space is moving too fast. How do you stay up-to-date with all the inventions?
This might infuriate many people, but my aim is not to stay on top of the research world. Very few papers pass the test of time, and you need a keen eye to notice them.
Now the fun part: I do not have a keen eye, but many people do, so I follow what they focus on. Simple, isn't it?
You have written data science and machine learning articles for companies including PyImageSearch and Weights and Biases. What role has this played in your data science and machine learning career?
I love teaching (myself and others), and writing has been a vent for doing just that. My career revolves around writing good code and conveying thoughts. I would not have been confident about what I knew if I had never started writing.
I also encourage people to write. It is the best way to know where you stand. Writing is a difficult skill to hone, but once you master it, you will see how far it can take you.
What role has mentorship played in your career?
Quite literally everything. I owe my entire career to mentorship and peer review.
Sayak Paul is one of the reasons I do what I do. People who get his mentorship are bound to do well. He ignites the urge to achieve things while keeping your head grounded.
Charles Frye was an inspiration back in my wandb days. He believed in what I was doing and was never afraid to delegate critical tasks.
At PyImageSearch, my manager Jon Haase mentors me every day. I have learned about critical thinking, making quick decisions, and working asynchronously with people around the globe. These are formidable skills to hone, I will tell you that much.
What would your learning journey look like today if you were just starting your journey to becoming a machine learning engineer? Which resources would you use, and where would you find them?
Not much would change. I would still ask for help, stick to a topic until I had a firm hold on it, and try to navigate my own way.
It is never about which resources to use; it is always about sticking to the topic. One has to sit in the chaos of information and leave it to the brain to make sense of it (or not).
Contributing to top open-source tools may sound intimidating for some engineers. How can one contribute to data science and machine learning open-source tools?
If you like a tool and you use it, go to the issues and discussions tab. Spend some time there with a penchant for helping people facing issues. Once you have done that, there is nothing that can stop you.
The key is familiarizing oneself with the repository and being kind to the community.
Tell us about your experience contributing examples to keras.io. How can one start contributing?
I love Keras and the community around it. I had seen people like Sayak Paul and Aakash Kumar Nain contribute to the examples, and that is when I knew I had to do it as well.
It is as easy as it gets. First, you build something with Keras in a Colab notebook. Then, you send a PR to the keras-io repository, and either François or the Keras team reviews the example and guides you through it.
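For a sense of scale, the "something" does not need to be elaborate. Here is a toy sketch (a hypothetical MNIST classifier, not one of the published keras.io examples); a real submission wraps code like this in prose that explains every step:

```python
from tensorflow import keras

# Load and normalize MNIST -- small enough to train in a free Colab session.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

# A tiny convolutional classifier, purely illustrative.
model = keras.Sequential([
    keras.layers.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(32, kernel_size=3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=1, batch_size=128, validation_split=0.1)
print(model.evaluate(x_test, y_test, verbose=0))
```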
You have won the Google OSS Expert Prize and the TensorFlow community spotlight awards. Tell us about this experience and how you created content worth such awards.
Sayak Paul was the one who submitted our NeRF implementation to the TensorFlow team for the community spotlight award. Ritwik and I did not even know about it. That says a lot about what mentorship means.
After receiving the community spotlight award, I grew confident about submitting my work elsewhere.
Ritwik always tells me to build content that will help people; if it garners some awards, that is a bonus, but not the goal. I abide by this as much as I can.
You are a Google Developer Expert in machine learning. What do Google Developer Experts in machine learning do, and how can one become one? What’s the advantage of being part of this community?
A GDE in ML takes up the task of propagating knowledge in ML and DL. There are a bunch of perks: research grants, compute access, and a direct bridge to various Googlers.
People are referred to the program by an existing GDE. The key here is to collaborate with GDEs or get noticed by them. My mentor (Sayak Paul), a GDE, referred me to the program.
You are a Hugging Face Fellow. What do Hugging Face fellows do, and how can one become one? What’s the advantage of being part of this program?
A Hugging Face Fellow gets involved with the core Hugging Face team. The aim here is to propagate DL with the help of Hugging Face tools.
This program also works through referrals. You either get to work with HF Fellows, or the team notices your presence in the community and reaches out for collaboration. Always look out for such opportunities.
Where can people find you online? (Feel free to promote yourself or your company, including adding links)