In the video ‘Text-to-GRAPH w/ LGGM: Generative Graph Models’, code_your_own_AI explores new research on Large Generative Graph Models (LGGM), conducted by two universities in collaboration with Adobe and Intel. The work investigates diffusion-based graph generation and the commercial potential of text-to-graph functionality. The video explains how LGGM extends the paradigm of large generative models such as GPT and Stable Diffusion to graphs, enabling graphs to be generated from text prompts. The model was pre-trained on a corpus of 5,000 graphs drawn from 13 distinct domains, allowing it to generate new graphs with user-specified structural properties such as average degree and clustering coefficient.

The presenter contrasts graph neural networks (GNNs) with LGGMs: GNNs encode existing graph structures for tasks such as node classification and link prediction, whereas LGGMs generate new graph structures that satisfy user-defined criteria. Industrial applications of LGGMs include molecular graph generation, social network analysis, material design, and cybersecurity.

The video also touches on the underlying methodology (a discrete denoising diffusion process) and the multi-domain training approach. The presenter concludes by reflecting on the early stage of this technology and its potential future applications.
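The average degree and clustering coefficient mentioned above are standard graph statistics. As a minimal sketch (plain Python on an invented toy graph, not the paper's code or data), they can be computed like this:

```python
# Toy undirected graph as an adjacency-set dict; the data here is
# invented for illustration, not taken from the LGGM training corpus.
graph = {
    0: {1, 2, 3},
    1: {0, 2},
    2: {0, 1, 3},
    3: {0, 2},
}

# Average degree: mean number of neighbours per node.
avg_degree = sum(len(nbrs) for nbrs in graph.values()) / len(graph)

def local_clustering(node):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    nbrs = graph[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u in nbrs for v in nbrs if u < v and v in graph[u])
    return 2 * links / (k * (k - 1))

# Graph-level clustering coefficient: average over all nodes.
clustering = sum(local_clustering(n) for n in graph) / len(graph)

print(f"average degree: {avg_degree:.3f}")          # 2.500
print(f"clustering coefficient: {clustering:.3f}")  # 0.833
```

In LGGM's text-to-graph setting, targets for statistics like these are supplied in the prompt and the model is asked to generate graphs matching them.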

code_your_own_AI
Not Applicable
July 7, 2024
Large Graph Generative Models
11:55