Towards artificial general intelligence via a multimodal foundation model

Computer Science

N. Fei, Z. Lu, et al.

This research introduces BriVL, a multimodal foundation model that demonstrates both understanding and imagination across a range of cognitive tasks. Conducted by Nanyi Fei and colleagues, the study represents a significant step towards Artificial General Intelligence (AGI).

~3 min • Beginner • English
Abstract
The fundamental goal of artificial intelligence (AI) is to mimic the core cognitive activities of humans. Despite tremendous success in AI research, most existing methods have only a single cognitive ability. To overcome this limitation and take a solid step towards artificial general intelligence (AGI), we develop a foundation model pre-trained on huge multimodal data, which can be quickly adapted to various downstream cognitive tasks. To achieve this goal, we propose to pre-train our foundation model by self-supervised learning on weakly semantically correlated data crawled from the Internet, and show that promising results can be obtained on a wide range of downstream tasks. In particular, with the developed model-interpretability tools, we demonstrate that our foundation model now possesses a strong imagination ability. We believe that our work makes a transformative stride towards AGI, from the common practice of "weak or narrow AI" to that of "strong or generalized AI".
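The self-supervised pre-training described above pairs an image encoder and a text encoder and pulls matched image-text embeddings together while pushing mismatched ones apart, a cross-modal contrastive objective. The sketch below is a minimal, hedged illustration of a symmetric InfoNCE-style loss on pre-computed embeddings (the function name, the temperature value, and the use of NumPy are assumptions for illustration, not the authors' actual training code, which operates on learned encoder outputs and large negative queues).

```python
import numpy as np

def info_nce_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    img_emb, txt_emb: (N, D) arrays; row i of each is a matched image-text pair.
    Illustrative sketch only; temperature value is an assumption.
    """
    # L2-normalize so dot products become cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) similarity matrix

    def cross_entropy_diag(logits):
        # targets are the diagonal: image i matches text i
        logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # average the image-to-text and text-to-image directions
    return 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))
```

With perfectly aligned pairs the loss approaches zero; shuffling the text rows so pairs no longer match drives it up, which is what gives the encoders a learning signal even when the image-text correlation is only weak.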
Publisher
Nature Communications
Published On
Jun 02, 2022
Authors
Nanyi Fei, Zhiwu Lu, Yizhao Gao, Guoxing Yang, Yuqi Huo, Jingyuan Wen, Haoyu Lu, Ruihua Song, Xin Gao, Tao Xiang, Hao Sun, Ji-Rong Wen
Tags
multimodal model
image-text pairs
cognitive tasks
Artificial General Intelligence
cross-modal understanding