Stanhope AI Builds Models on Neuroscience and the Human Brain

2024-03-22

The debate about how far artificial intelligence (AI) can develop revolves mainly around what constitutes human intelligence and whether machines can functionally come close enough to the human brain.


UK startup Stanhope AI does not aim for artificial general intelligence (AGI); instead, it builds its models on principles from neuroscience, inspired by the predictive, hierarchical mechanisms of our brains.


The result is an AI that does not require training. It essentially only needs to be informed of its existence and given a set of prior beliefs; then it can take off (literally) into the real world and learn from its surroundings through its sensors. This is no different from how we expand our knowledge by seeing, hearing, and sensing, updating (or reinforcing) our worldview.


This spin-off from University College London recently raised £2.3 million for its neuroscience-inspired "agent-based AI".


Stanhope AI's Layered "Brain"


Stanhope AI's approach is based on the theory that the brain has a model of the world and continuously collects evidence to validate and update this model.


"This AI has several layers of 'brain', and the bottom layer of the brain is its sensors," explained Rosalyn Moran, the company's co-founder and CEO and a professor of computational neuroscience. For us, those sensors are our eyes; in this case, they are cameras and lidar.


"Then these sensors feed data into the prediction layer, which makes judgments like, 'Okay, I see a wall over there. Now I don't need to keep looking.' Then, at higher levels, these predictions are built into more interesting cognitive predictions. So, it's very much a layered brain."


This mirrors how our human brains make predictions to understand the world and conserve energy (the brain is the most energy-hungry organ in the body). The principle, known in neuroscience as "active inference," is part of the free energy principle proposed by Moran's co-founder, Professor Karl Friston.


"I don't need to check every pixel on the wall to determine if it's a wall - I can fill in some gaps. That's why we think the human brain is so efficient," added Moran.
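The prediction-error mechanism Moran describes can be sketched in a few lines. This is a toy illustration only, not Stanhope AI's actual model: a belief about a hidden cause is nudged toward a sensor reading by descending the precision-weighted prediction error, the basic move of predictive coding. All names and numbers here are invented for the sketch.

```python
# Toy predictive-coding update (an illustration, not Stanhope AI's model):
# the agent's belief `mu` about a hidden cause is repeatedly nudged toward
# the sensor reading by descending the precision-weighted prediction error,
# rather than being learned from a training dataset.

def update_belief(mu, observation, precision=1.0, lr=0.1, steps=50):
    """Refine belief `mu` by gradient descent on squared prediction error."""
    for _ in range(steps):
        prediction_error = observation - mu        # sensory "surprise"
        mu += lr * precision * prediction_error    # belief moves toward evidence
    return mu

belief = 0.0              # prior belief about the hidden state
sensor_reading = 2.0      # what the camera/lidar actually reports
belief = update_belief(belief, sensor_reading)
# belief ends up close to 2.0 without any offline training
```

Lowering the `precision` parameter makes the agent trust a noisy sensor less, which is how precision weighting lets a layered model decide which prediction errors are worth acting on.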


Essentially, the way you experience the world is your brain predicting what you will see, a strategy that conserves energy; the brain then refines those predictions against incoming sensory data. Stanhope AI's model works in a similar way, using visual input from the surrounding world and making autonomous decisions based on new real-time data.


No Need for Large Training Datasets


Designing AI this way is very different from traditional machine learning methods, such as those used to train large language models, which can only operate within the distribution of the data their trainers provide.


"We don't need to train [our model]," said Moran. "The hard work is in building the generative model and making sure it's correct and has prior knowledge consistent with where you want it to operate."
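The "priors instead of training" workflow Moran describes can be illustrated with a plain Bayesian update. The hypotheses and likelihood numbers below are invented for illustration; only the mechanism reflects the quote: start from prior beliefs, update them as sensor evidence arrives, with no training phase at all.

```python
# Hypothetical sketch: an agent starts with a prior over two hypotheses
# about the cell ahead ("wall" vs "open") and updates it with Bayes' rule
# as sensor readings arrive -- no training phase, just priors + evidence.
# The likelihood numbers are invented for this example.

prior = {"wall": 0.5, "open": 0.5}
# P(sensor reports "obstacle" | hypothesis)
likelihood = {"wall": 0.9, "open": 0.2}

belief = dict(prior)
for _ in range(3):                    # three "obstacle" readings in a row
    for h in belief:                  # Bayes numerator: prior * likelihood
        belief[h] *= likelihood[h]
    total = sum(belief.values())
    for h in belief:                  # normalise back to a distribution
        belief[h] /= total

# After three consistent readings the agent is confident it faces a wall.
```

The "hard work" in Moran's phrasing corresponds to choosing the hypotheses and likelihoods well, so that the model's priors match the environment it will operate in.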


All of this is theoretically interesting, but a startup born out of the lab also needs practical applications. Stanhope AI says its AI can be installed on autonomous machines such as delivery drones and robots, and the company is currently conducting drone tests with partners including the German Federal Disruptive Innovation Agency and the Royal Navy.


So far, the startup's biggest technical challenge has been scaling up from small models running in a lab environment to large models that can navigate broader environments.


"We have to adopt three mathematical approaches to make the free energy calculations more efficient, in order to build a larger world for our drones," said Moran. She added that finding suitable hardware the company can access and control without relying on third parties is a major engineering challenge.


The Wave of Agent-Based AI


Stanhope AI's active inference model is truly autonomous: it can rebuild and refine its own predictions. It belongs to the wave of "agent-based AI" which, like the human brain, learns continuously by guessing what will happen next and comparing those predictions against real-time data. This removes the need for extensive (and expensive) up-front training and reduces the risk of AI "hallucinations."


It is also worth noting that Stanhope AI's system is a white-box model, with "interpretability built into the architecture." Moran explained: "We make sure it runs perfectly in simulation. If the AI or drone does something strange, we delve into its beliefs at that moment and why it did what it did. So, it's a very different way of developing AI." The idea, she said, is to change what AI and robots are capable of so they can have a greater impact in real-world scenarios.


UCL Technology Fund led Stanhope AI's £2.3 million funding round, with participation from Creator Fund, MMC Ventures, Moonfire Ventures, and Rockmount Capital, as well as several industry investors. Stanhope AI was co-founded in 2021 by Professor Rosalyn Moran, Professor Karl Friston, and Dr. Biswa Sengupta.