Research and develop reliable, scalable algorithms that solve enterprise problems
Infrrd’s AI residency program is a 12-month program where you’ll help create AI solutions that solve complex enterprise challenges. Our AI-native platform implements the algorithms and systems you’ll help research and create, allowing customers to digitally transform their operations.
As a resident, you’ll work at the edge of innovation in applied AI.
You’ll work with our AI Labs team to develop, build, and optimize machine learning, computer vision, and NLP algorithms. Residency projects can include conducting research on emerging topics such as context learning, reinforcement learning, N-shot learning, confidence scoring, and other AI topics.
You'll create novel ways to apply these approaches and algorithms to address real-world problems.
Residents will exit the program with highly valuable, hands-on experience developing reliable, scalable and optimized algorithms to solve enterprise problems.
As an AI resident, you’ll be assigned a project that best fits your skills and interests. You’ll collaborate with our expert team to develop solutions in topic areas such as the ones below. As you investigate topics, we’ll always keep an eye on how your work will eventually be productized and create customer value.
A document’s meaning can change drastically based on context. You’ll help us use NLP and layout analysis to replicate how our brains map that context. Instead of memorizing values, our models need to be able to generalize them from other documents and the surrounding text.
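One way to think about combining text and layout is to give each token features drawn from both its content and its position on the page. The sketch below is purely illustrative (the feature names, page size, and function are assumptions, not Infrrd's pipeline):

```python
# Hypothetical sketch: describing a document token by both its text and
# its layout position, so a model can generalize from surrounding context
# rather than memorizing exact values.
def token_features(text, bbox, page_w, page_h):
    x0, y0, x1, y1 = bbox
    return {
        # content features
        "is_numeric": text.replace(".", "", 1).isdigit(),
        "is_upper": text.isupper(),
        # layout features: token center, normalized to [0, 1]
        "cx": (x0 + x1) / (2 * page_w),
        "cy": (y0 + y1) / (2 * page_h),
    }

# A token like "1250.00" near the bottom-right of a US Letter page
# (612 x 792 points) yields numeric-content plus position features.
feats = token_features("1250.00", (500, 700, 560, 715), 612, 792)
```

A downstream model can then learn, for example, that numeric tokens near the bottom right of an invoice tend to be totals, without ever seeing that specific value before.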
Reinforcement learning trains algorithms through penalties and rewards. You’ll help develop tools that put reinforcement learning to work, reducing the amount of training data our solutions need.
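The penalty-and-reward idea can be made concrete with a minimal sketch, assuming a toy environment: tabular Q-learning on a five-cell corridor, where reaching the rightmost cell earns a reward and every extra step costs a small penalty. None of this reflects Infrrd's actual systems; it only illustrates the learning signal.

```python
import random

# Toy Q-learning sketch: the agent learns, from reward (+1 at the goal)
# and penalty (-0.01 per step), to always move right along a corridor.
random.seed(0)
N_STATES = 5
ACTIONS = [-1, +1]  # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else -0.01
        # standard Q-learning update
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# Greedy policy after training: move right (+1) from every non-goal state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
```

No labeled examples were supplied here; the reward signal alone shapes the behavior, which is why reinforcement learning can cut down the need for hand-annotated training data.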
Right now, machine learning models learn using lots of data. The more data we feed the system, the better it performs. You’ll help determine how to teach our models to perform better using less data.
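One family of techniques for learning from little data is the nearest-centroid (prototype) classifier: average a handful of labeled examples per class into a prototype, then label new points by the closest prototype. The document classes and feature values below are made up for illustration:

```python
import math

# Minimal few-shot sketch: build one prototype per class from just a
# few labeled examples, then classify by nearest prototype.
def centroid(points):
    dims = len(points[0])
    return [sum(p[d] for p in points) / len(points) for d in range(dims)]

def classify(x, prototypes):
    # pick the class whose prototype is closest in Euclidean distance
    return min(prototypes, key=lambda c: math.dist(x, prototypes[c]))

# Three examples per class stand in for a full training set.
support = {
    "invoice": [(0.9, 0.1), (0.8, 0.2), (1.0, 0.0)],
    "receipt": [(0.1, 0.9), (0.2, 0.8), (0.0, 1.0)],
}
prototypes = {label: centroid(pts) for label, pts in support.items()}
```

With only three examples per class, `classify((0.85, 0.15), prototypes)` already resolves to `"invoice"`; research in N-shot learning asks how far this kind of generalization can be pushed.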
A prediction’s confidence score should reflect genuine understanding. You’ll help build models that produce meaningful confidence scores, compensating for the fact that the model may be drawing on a narrow band of knowledge when it generates its predictions.
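A common starting point for meaningful confidence is to soften raw model scores with a temperature-scaled softmax and abstain when the top probability is low, routing the item to human review. The temperature and threshold below are illustrative assumptions, not production settings:

```python
import math

# Hedged sketch: temperature-scaled softmax confidence with abstention.
def softmax(scores, temperature=2.0):
    # higher temperature flattens the distribution, tempering
    # overconfident raw scores
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_with_confidence(scores, labels, threshold=0.6):
    probs = softmax(scores)
    best = max(range(len(labels)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return None, probs[best]  # abstain: route to human review
    return labels[best], probs[best]

labels = ["date", "total", "name"]
confident = predict_with_confidence([4.0, 0.5, 0.2], labels)
uncertain = predict_with_confidence([1.0, 0.9, 0.8], labels)
```

In the first call one score clearly dominates, so the prediction is returned; in the second the scores are nearly tied, so the model abstains rather than emit a guess dressed up as certainty.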