Simon's work as a research scientist leading the Algorithm Support team pushes the boundaries of existing AI knowledge. His expertise in AI reasoning is informed by his time as a PhD candidate at MIT, in the Model-based Embedded and Robotic Systems group headed by Mobi's Chief Scientist Brian Williams. His work explores what is possible when a decision-making process involves complex constraints and multiple steps. Simon's ongoing research continues to push beyond the state of the art, discovering new techniques and synthesizing existing research to develop exciting new algorithms for planning and optimization.
“Part of Mobi’s ‘secret sauce’ is how we’re able to know which AI tools to apply to which problems and how to make sure all of the solutions to the problem work together.”
1. What do you think is one of the biggest misconceptions about AI?
One of the biggest misconceptions about AI is the idea that AI and machine learning are equivalent. Machine learning is one particular aspect of AI, but there are many other aspects. I work on a different side of AI that is related to reasoning, which explores how to take a complex AI model and make multi-step plans around it.
This misconception matters because it limits the way we think about using AI. If you assume machine learning is all of AI, you conclude that everything AI can solve is solvable with machine learning, and that line of reasoning is false. It would mean, for example, that you could plan complex itineraries with machine learning alone, but that’s not the case. At the end of the day, machine learning is good at predicting “if you do this, then this will be the result” by approximating existing models. But it’s not good at reasoning with those models: it’s very good at one-step prediction, but it doesn’t do multi-step prediction well.
Another misconception is the idea that AI is anything that has to do with representing human intelligence. You may have heard of neural networks, which people say are representations of human brains, but they aren’t really. The human brain has additional mechanisms that aren’t in neural networks. When you use that analogy, it’s dangerous because you start romanticizing and thinking that you’re making a human brain, but you’re not!
AI has evolved beyond trying to mimic human intelligence and has gone in the direction of identifying well-defined tasks that the AI will do as well as possible. A lot of human intelligence is kind of accidental, but with AI we’re purposeful. I’m pretty sure that nothing in our human brain was designed to schedule thousands of people getting into metal cans powered by dead dinosaurs (airplanes). We just found a way for our brains to adapt to it, but it’s not optimal. We’ve explicitly designed AI to solve these types of problems.
2. What are some problems that AI can solve that would be impossible to solve otherwise?
AI can be very accurate in reasoning over multiple steps, which humans can’t do. Humans think in “eyeballing” terms: we decide that something is approximately right and move forward from there, whereas AI is very deliberate and precise in its reasoning. Humans take calculated risks, and we’re bad at math. We take a nap before a flight, then hit a traffic jam, and then have to reschedule. AI can make sure that everything is actually precise and possible even when there are many constraints. Trip planning is a good example: a plan has to respect timing constraints and the traversals you have to make, and it has to satisfy the wants and needs of every person involved. AI is far better positioned than humans to solve this type of problem, because it can handle an enormous number of constraints and balance them in a very precise way.
3. How do you work on AI at Mobi?
I concentrate on the reasoning aspects of Mobi’s AI, like how do you optimize when there are a ton of complex and potentially conflicting decisions? How do you schedule things if you’re one party going to a ton of different locations? Or if you have a bunch of parties going to one place and competing over the same resources, how do you allocate those resources so that overall the system performs well and people get what they want?
In AI, reasoning means using models of the world to plan over multiple steps so that even with complex cause and effect and potentially conflicting constraints you can still come up with a good solution.
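To make the idea concrete, here is a minimal sketch of multi-step reasoning over constraints, in the spirit of the trip-planning example above. All of the stop names, time windows, and durations are invented for illustration, and the brute-force search stands in for the far more sophisticated planning algorithms an actual system would use. The key point it shows is the multi-step structure: each step's feasibility depends on the decisions made before it, which is exactly what one-step prediction cannot capture.

```python
from itertools import permutations

# Hypothetical itinerary: visit each stop within its time window.
# Times are in minutes after a 9:00 start; all values are illustrative.
TRAVEL_MIN = 30  # assumed travel time between any two stops
VISIT_MIN = 45   # assumed time spent at each stop

# (stop name, earliest arrival, latest arrival)
STOPS = [
    ("museum", 0, 120),
    ("lunch", 150, 240),
    ("harbor", 60, 300),
]

def feasible(order, start=0):
    """Simulate a plan step by step, checking each timing constraint.

    Returns the finish time if every constraint holds, else None.
    """
    t = start
    for _name, earliest, latest in order:
        t += TRAVEL_MIN        # travel to the next stop
        t = max(t, earliest)   # wait if we arrive before the window opens
        if t > latest:         # window already closed: the plan fails
            return None
        t += VISIT_MIN         # spend time at the stop
    return t

def plan(stops):
    """Brute-force search over orderings for the earliest finish.

    A real planner would prune this search rather than enumerate it.
    """
    best = None
    for order in permutations(stops):
        finish = feasible(order)
        if finish is not None and (best is None or finish < best[1]):
            best = (order, finish)
    return best
```

Note that whether "harbor" can come first depends on what must happen afterward: visiting it early pushes the museum past its window. That coupling between steps is what makes this a reasoning problem rather than a prediction problem.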
I look at what existing techniques in AI reasoning research we can use to solve the problems Mobi is trying to address, and if existing techniques don’t perform well enough, I try to push beyond the state of the art. I also think about how to use state-of-the-art model-acquisition techniques: how do I exploit machine learning to get models to feed into my algorithms?
“A lot of human intelligence is kind of accidental, but with AI we’re purposeful.”
4. What’s unique about how Mobi is working with AI?
Mobi is very aware of all of the different aspects and parts of AI. We know that machine learning is good for model-building and we know that AI reasoning techniques are good for reasoning over these machine learning models, so Mobi uses the right tools for the right job. We don’t have one hammer and think all the world’s a nail. We have the expertise and breadth and depth to know the right tools for the right problems and to know how big, complex problems divide down into smaller, manageable problems that we can apply the right tools to.
5. What are the biggest challenges of working with AI right now?
The biggest challenge for making AI that is applicable to the real world is knowing how to scope and break down a huge, complex problem into its smaller, solvable parts. Part of Mobi’s “secret sauce” is how we’re able to know which AI tools to apply to which problems and how to make sure all of the solutions to the problem work together.