Huachuangpai | Mecarmand and Academician Zhang Jianwei's laboratory at the University of Hamburg jointly explore multi-modal large models for robots.

  Mecarmand has signed a cooperation agreement with Academician Zhang Jianwei's laboratory at the University of Hamburg to jointly pursue research and innovation in multi-modal large models for robots. As large-model technology advances rapidly, embodied intelligence is becoming a focal point of attention. Mecarmand has long been committed to developing advanced intelligent robot technology, recognizing that the key to achieving genuine intelligence lies in robots' multi-modal capabilities.

  Academician Zhang Jianwei was the first to propose the concept of the cross-modal learning robot, achieving general embodied intelligence by integrating large volumes of noisy, multi-source, multi-modal perceptual information such as vision, hearing, and somatosensation, and he took the lead in proposing and advocating embodied intelligent robot research within China's Brain Project. His team has worked for many years in frontier fields such as robotic skill-based manipulation, multi-modal human-computer interaction, and intelligent computing for Industry 4.0, publishing more than 500 papers indexed in SCI/EI and 6 monographs. In recent years, the team has made pioneering contributions in attention mechanisms and collaborative representation learning, proposing a learning algorithm built on a self-attention-network large model that uses attention representations to achieve fine-grained grasping and dexterous manipulation based on object affordances, laying the foundation for multi-modal intelligent assembly. Through close cooperation with Academician Zhang Jianwei's laboratory, Mecarmand will jointly explore the development and application of multi-modal large models for robots.

  Under the cooperation agreement, Mecarmand is jointly developing a large robot model that deeply integrates vision, speech, and language capabilities. The model will enable robots to perceive and understand diverse signals in their environment and to communicate with humans through natural language. These research outcomes are expected to substantially raise the intelligence level of robots, allowing them to cooperate and interact with humans more naturally. Through the joint efforts of Mecarmand and Academician Zhang Jianwei's laboratory at the University of Hamburg, the two parties aim to break through current bottlenecks in robot technology and deliver more advanced, innovative solutions for industrial robots.

  Multi-modal large model of the Mecarmand robot