Biological robots may one day surpass conventional machines in autonomy, intelligence, and sustainability
Even the most hardened techno-sceptics have been taken aback by the natural language chat bots launched in quick succession over the past few months. Their moments of apparent intuitiveness and fluency make users almost forget they are machines.
Impressively, researchers are noticing that some natural language models are starting to develop theories of mind. These curious outliers can speculate accurately on the motives, thoughts, and desires of animals and humans even though their originators did not programme them with this ability.
Although many have highlighted their potential value as educational and research tools, detractors point out their shortcomings. They say the models cannot reflect critically on the data sets they have been trained on and are far from autonomous thinkers. As a result, worries abound about how these programs could quickly reproduce and proliferate misinformation or discriminatory speech.
Biology opens new AI horizons
Certain scientists and philosophers remain even more sceptical about the scientific potential of these models. They say that the fundamental approach underlying these chat bots will eventually hit a dead end because it is based on a flawed understanding of animal intelligence.
Some believe true intelligence – reflective and even conscious – can only ever emerge in a biological body. Only minds integrated into fully functional bodies, ones which learn by moving in and interacting with the world, will ever approach the adaptiveness, complexity, and autonomy of human brains. This view of AI is known as the ‘embodied paradigm’, which stands sharply opposed to the dominant cognitivist, or classical, model.
The older cognitivist model of AI criticised by the embodied paradigm assumes cognitive abilities emerge from simpler algorithms once they reach a certain threshold of complexity. It operates on the belief that cognitive structures in animals at heart resemble the step-by-step instructions beneath prosaic computational procedures like internet search engines, however crude and mechanical they might seem in comparison. In these classical approaches, the human brain is an information processor that manipulates symbols – an unusually complicated digital computer.
An early challenger to this model was the connectionist paradigm in AI. Although it agreed with cognitivism that computation – inputs go in, are transformed by a set of rules, and new outputs come out – was the underlying basis of thought, it proposed a more multi-layered model. Instead of explicit, pre-programmed rules for transforming information inputs, it sought to build up a simplified model of the brain. Here, networks composed of many ‘nodes’ – standing in for living neurons – were key.
Under connectionism, these nodes, or model neurons, have varying strengths of connection with other nodes. These connection strengths model the effect of synapses in the brain, transforming inputs in sometimes unexpected ways. This approach has been responsible for many landmarks in recent AI history, including facial recognition, reading, and grammatical ability.
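To make the connectionist picture concrete, here is a minimal, purely illustrative sketch of a tiny network of ‘nodes’ whose connection strengths (weights) transform an input into an output. The network shape, the random weights, and the tanh non-linearity are assumptions chosen for illustration only, not a reference to any particular model discussed in this article.

```python
# A toy 'connectionist' network: nodes connected by weighted links.
# All numbers and the network shape are arbitrary, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Connection strengths (weights): 3 input nodes -> 4 hidden nodes -> 1 output node.
w_hidden = rng.normal(size=(3, 4))   # input-to-hidden connection strengths
w_output = rng.normal(size=(4, 1))   # hidden-to-output connection strengths

def forward(inputs: np.ndarray) -> np.ndarray:
    """Each node sums its weighted inputs and applies a simple non-linearity,
    standing in (very loosely) for the effect of synapses."""
    hidden = np.tanh(inputs @ w_hidden)
    return np.tanh(hidden @ w_output)

print(forward(np.array([0.2, -1.0, 0.5])))  # a small vector in, a transformed value out
```

In real connectionist systems these weights are not set by hand but adjusted through training on data; that learning step is omitted from this sketch.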
However, the embodied theorists doubt that even neural networks – though closer to how biological brains function – will ever be able to model perceiving, manipulating, socialising, emotional beings like us. Complex manual tasks also remain beyond them, even though such tasks are supremely easy for an adult human being.
Although connectionism critiqued the classical approach to AI, the embodied paradigm pushes further. Even though connectionism came closer to the mechanisms of actual brain function, some were dissatisfied that it still detached mental processes from the whole organism.
The embodied paradigm’s departure from both cognitivism and connectionism is the claim that the hardware is just as important as the software when it comes to complex cognition – in fact, the body is where the seeds of intelligence and autonomy really reside. Its proponents believe biological forms of intelligence must be understood as a full package, inseparable from the biological body.
Biological AI is smarter
Embodied AI takes the view that the fundamental difference between the living and the non-living is agency: a living agent has sensorimotor abilities, it can perceive, and it uses the information it perceives – by acting within and upon a physical environment – to do things it could not otherwise have done without that sensory knowledge.
It proceeds from the idea that human and animal bodies and brains co-evolved. Detaching the latter from the former in a bid to replicate intelligence in the abstract form of digital information is, on this view, doomed to fail.
Chief among the champions of the embodied approach are bioroboticists. Their ultimate goal is an intelligent, responsive body made up of elastic, fleshy materials close to the compounds of natural organisms.
These living robots would not just be bio-based but bio-mimetic: their physical encasement would be made of biological compounds, perhaps hybridised with synthetic ones, and their brain would not only be made from such materials but would also mimic how the brains of living animals function. Like living bodies, bio-bots could self-assemble and control their movements at multiple scales.
Rodney Brooks is a bioroboticist who summed up the driving argument of his discipline when he said that evolution had to take longer and expend more effort to make walking insects than to make thinking humans. Bioroboticists work towards intelligence from the ground up: once you manage to create a being that gets locomotion right, more complex intelligence will come more easily.
Biorobotics is not just concerned with individual movements and motor functions: the field has also been fed by research into how collectives of agents work – fish schools, ant colonies – systems that appear to organise in complex ways and respond intelligently to the environment. This is despite the fact that the individuals within them have no particular goal in mind (such as a school of fish escaping a predator) but simply respond to what their immediate neighbours are doing at a given moment.
Recent developments
Talk of artificial intelligence housed inside bio-based machines may sound like distant fantasy, but scientists have already made concrete advances that could feed into a working model. Step by step, they are piecing together the many elements of a fully bio-based, programmable, and autonomous living machine.
There are, for example, bio-hybrid robots that combine living and artificial materials. One popular research stream looks at how to replicate shoals of zebrafish using biohybrids. Researchers programmed the robots with an algorithm that closely matches the simple decisions fish make when homing in on a prey target. Over time, the algorithm retains only those collective behaviours that succeed in catching prey: in short, the robots are adaptive.
The demonstration showed that the seemingly complicated phenomenon of fish schooling behaviour – responsive, adaptive, fluid, and choreographed – can be replicated by setting a few simple ground rules.
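As a rough illustration of how a few local rules can produce school-like collective motion, the sketch below implements a generic boids-style update. The agent count, neighbourhood radius, and rule weights are arbitrary assumptions for illustration; this is not the researchers’ actual algorithm, which additionally selects for behaviours that succeed in catching prey.

```python
# Toy boids-style flocking: each agent reacts only to its immediate neighbours.
# All parameters below are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
N = 30                                     # number of agents
pos = rng.uniform(0, 10, size=(N, 2))      # 2D positions
vel = rng.normal(0, 0.1, size=(N, 2))      # 2D velocities

def step(pos, vel, radius=2.0, dt=0.1):
    new_vel = vel.copy()
    for i in range(N):
        dists = np.linalg.norm(pos - pos[i], axis=1)
        neighbours = (dists < radius) & (dists > 0)   # local neighbourhood only
        if neighbours.any():
            # Rule 1: align with the neighbours' average heading.
            new_vel[i] += 0.05 * (vel[neighbours].mean(axis=0) - vel[i])
            # Rule 2: move towards the neighbours' centre (cohesion).
            new_vel[i] += 0.01 * (pos[neighbours].mean(axis=0) - pos[i])
            # Rule 3: steer away from neighbours that are too close (separation).
            too_close = (dists < radius / 4) & (dists > 0)
            if too_close.any():
                new_vel[i] -= 0.05 * (pos[too_close] - pos[i]).mean(axis=0)
    return pos + new_vel * dt, new_vel

for _ in range(100):                       # run the simulation for 100 steps
    pos, vel = step(pos, vel)
```

No agent here has a global picture of the shoal; any coordinated motion emerges entirely from these three local rules.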
Here, we see some of the embodied paradigm’s assertions borne out: autonomous learning in both individuals and collectives, a hallmark of intelligence, is the sum of physiological responses to perceived environmental changes. And to perceive the environment, you first need a functioning body.
The soft biorobots
Even more intriguing than the bio-hybrid experiments are the xenobots: the closest we have come to living machines today.
Xenobots are made from a special type of cell taken from the embryos of a frog species called Xenopus laevis. Assemblages of these cells display self-directed development: they organise themselves into shapes according to DNA instructions, just as they would inside the organisms they were sampled from.
Already, researchers have shown that these cell-based robots can navigate environments they have not seen before – Xenopus laevis cells with a wild-type genome can school, self-replicate, navigate mazes, and assemble themselves into bots with new and different behaviours – all without brain neurons or computational models of neurons.
This is quite different from the bio-inspired algorithms we see in studies of fish shoaling behaviour. In fact, the xenobots depart from anything we would ordinarily label as computational at all.
Despite their biological character, however, they can be controlled by humans to carry out certain kinds of tasks. Scientists say that, merged with synthetic biology circuits powered by bioelectricity or biochemical energy, their range of behaviour would diversify. They could be programmed to perform specific kinds of movements and to assemble into specific kinds of shapes.
This type of research offers a platform for further advances in living, moving, environmentally adaptive, and inter-relating bio-machines powered by and constituted from the same biochemical parts as we are.
Biobots sustainable by default
Biobots may offer a new approach to developing intelligent machines, but their potential does not end with improvements to AI. Some proponents of the embodied paradigm think biobots could be key to making computational technologies sustainable.
José Halloy of the Paris Interdisciplinary Energy Research Institute (LIED), Université Paris Diderot in France, is vocal on this point, calling for society to invest in biological machines for their capacity to work with the metabolic cycles of natural ecosystems.
Biobots are in theory greener than the machines we have today, being made up of a few simple, common elements: carbon (C), hydrogen (H), nitrogen (N), oxygen (O), phosphorus (P), and sulfur (S). Mineral elements such as boron, cobalt, iron, copper, molybdenum, selenium, silicon, tin, vanadium, and zinc are also essential to living beings, but in far lower concentrations than in the typical chemical battery or machine.
Made of the same elements and structures as us, these organic replicas would fade into the geosphere without a trace, ready for nature’s recycling processes. This would mark a radical departure from the wasteful industrial setup we have today.
Halloy points to the ineliminable environmental costs of running conventional AI – a problem that applies to all computing technologies: "Even if in terms of computing efficiency, the recent processors are excellent, the absolute power necessary to run them is also increasing exponentially," he explained in a recent text.
Then there are the environmental costs of making the hardware. The global computing and robotics industry depends on metal mining – a hugely destructive activity. At the same time, the development of thinner semiconductors means smaller devices, but it also pushes up the cost of recycling the valuable materials locked up in these intricate structures. Miniaturisation, paradoxically, means waste.
This environmental toll of inorganic machines, he says, “necessitates reinventing our computational technologies, i.e. the founding basis of robotics and AI and other related technologies, in terms of materials, architectures, and processes, and linking these processes within new technological ecosystems, learning from the self-regulating ecological cycles of birth, growth, death, and re-use found in the natural world”.
This sounds like a groundbreaking claim. Yet Halloy is sanguine about his ideas. While they may be surprising to those raised in a world where chrome, aluminium, and steel devices do our bidding, he says that living machines have existed for a long time: “The concept of a ‘living machine’ was invented in the Neolithic period”, noted Halloy in a recent online post, “when humans began to domesticate a wide range of organisms both micro and macroscopic, plants, animals and fungi. It was also at this time that humans began to design ecosystems.”
Quite apart from the AI-powered service robots and chat platforms drawing public attention today, Halloy is proposing something different, and more expansive – robots that live, thrive, and change within a whole ecosystem of other biorobots, and within the wider social ecosystem of human economic activities.