Wednesday, December 17, 2025

Towards an AVR Robotics Future



Craig McDonnell of ABB Robotics tells Logistics Business why the company’s new AI-enabled technology marks the next great breakthrough – Autonomous Versatile Robotics (AVR).

ABB Robotics is no newcomer. The Swiss-based global power and automation business has a long and golden history in robotics. “ABB actually invented the modern computer-controlled electrical robot, 51 years ago,” says Craig McDonnell, Business Line Managing Director Industries. “We’ve been front and centre on the robotics journey and our footprint is global.”

The official numbers back him up. The ABB Robotics division has approximately 7,000 employees. With 2024 revenues of $2.3 billion, it represented about 7 percent of overall ABB Group revenue. That makes it one of the most significant names in world robotics.

The portfolio is comprehensive. “We have the broadest range of robots on the market,” McDonnell explains. “The mechatronic side spans very small robots, typically used in electronic-type applications, all the way through to large robots that can move 800kg loads and manipulate accordingly. More recently, we have also developed collaborative robots with complete capability, as well as other combinations of mobile robotics where we combine the robot manipulator with the mobile capability.”

This history and breadth mean that when ABB targets more innovation, the world sits up and takes notice. The company's vision – which is already materialising in physical form on the warehouse floors of Europe – is for Autonomous Versatile Robotics (AVR), in which generative AI plays a game-changing role.

“AVR builds on the legacy and foundation of our traditional USPs,” he says. “We are known for the path accuracy of our robots and for the quality and life expectancy of the robot. We have a strong reputation for the reliability of our products.”

Autonomous Versatile Robotics

So, what is AVR? In the company’s official description robots will be “moving beyond fixed procedures and repeatable tasks – to a new era where they plan, adapt, and perform complex work in real time, uniting vision, precision, speed, dexterity, and mobility – all powered by generative AI.”

The question is how. First, ABB has deployed the potential of its RobotStudio® simulation suite, an extremely accurate digital twin environment to design and control robotics. “RobotStudio allows us to move into a number of new spaces and it’s on that basic platform that we’ve built our AVR approach,” explains McDonnell.

Second is the application of AI. In September this year, ABB Robotics announced an investment in California-based LandingAI to accelerate the transformation of vision AI, making it faster, more intuitive, and accessible to a broader range of users. The collaboration, claimed as a first, will integrate LandingAI’s vision AI capabilities, such as LandingLens, into ABB Robotics’ own software suite, marking another milestone in ABB’s journey towards truly autonomous and versatile robots.

In essence, LandingAI’s LandingLens platform enables the rapid training of vision AI systems to recognise and respond to objects, patterns or defects with no complex programming or AI expertise required. Barriers to adoption are therefore significantly reduced.

McDonnell describes the background to the product development as a series of learnings.

“The traditional, structured environment gave robots a predefined way of operating,” he explains. “Over the past decade or so, the robotics sector has expanded into new areas such as warehousing and logistics, and these are more unstructured environments. Yes, there remains a portion that is very structured, but you will often see situations of multiple SKUs, or fast changes to the types of products being handled. This demand led us to work on the sensing and perception of robots, and in linking that to motion control, to the navigation and dexterity of the robot so that it can handle these unstructured applications.

“More recently, we’ve started adding AI not just to the vision side but also path planning, programming and even language recognition, to enable our robots to handle unstructured environments and to make significant steps forward in ease of use.”

So, in plain layman's terms, I venture, if a warehouse employee decides unilaterally to dump a new configuration of pallets in the middle of the warehouse floor, the robot does not stop work and wait for new commands; it devises its own solution.

“Yes, that’s exactly right,” he agrees, “but there are also more structured scenarios. So, for instance, if you are handling products with a high degree of variability, which perhaps the robot did not know before arriving at the product location – or the reverse, the new product arrived at the robot – the enhanced vision systems enabled by LandingAI will enable the robot to manipulate and handle the product far more simply and with far less effort than was previously possible.”

In an environment where both time and labour are precious, and expensive, commodities, this matters.

“Traditionally, this reconfiguration takes months, and it takes high degrees of integration,” he adds. “The secret sauce is that the simplified and faster usability enables even the end user, the warehouse floor personnel, to handle these variations. The fact is, if we can’t get that usability to a very high level, then widespread adoption of this new AI technology is going to be a challenge. And we believe very strongly that we’ve solved that problem.”

Deploying AI

Simple system usability and fast configuration are critical adoption factors, he confirms. “It’s been over a decade since we started deploying AI in robotics applications in logistics with great success – for instance, in the clothing sector, achieving 99% picking reliability. Those are real advances, but you probably needed an AI engineer and an expert to accompany that robot. So the advance is about moving beyond a science experiment to an industrial-hardened application that can be applied at scale and is easy to deploy and adjust as required.

“With LandingLens, LandingAI have developed a very user-friendly way to identify and characterise the object. We can then add our application and robot knowledge around specific applications – we call them ‘skins’ – so that the product you receive is 80-90% ready to go, with pre-trained algorithms and approaches, and all you then need to start is perhaps a few pictures from your mobile phone of the specified environment.”

Barriers to entry, and costs, are massively cut. “We did some calculations, and you would need hundreds of engineers for these tasks if you didn’t have this usability. Integrators and value-providers themselves would simply not be able to participate profitably without it. So that is very exciting for us.”

Exciting times, compounded by a further announcement in October that ABB Robotics is to be divested from the wider group to Japanese technology investor SoftBank.

“ABB had announced the intention to spin off the business and were working towards that,” Craig McDonnell explains. “We met SoftBank through that process; as we are closely aligned with their vision on physical AI and the transformational effect it can have on robotics, ABB decided on the divestiture approach.”

So AI is not a flash in the pan in robotics. “Physical use cases of AI are still at very early stages, but there are going to be many applications to come. And robots are going to be increasingly accessible to the people who operate them,” he predicts.
