Foundations and Applications in Large-scale AI Models
Pre-training, Fine-tuning, and Prompt-based Learning
Workshop held in conjunction with KDD 2023
Deep learning techniques have advanced rapidly in recent years, leading to significant progress in pre-trained and fine-tuned large-scale AI models. In the natural language processing domain, for example, the traditional "pre-train, fine-tune" paradigm is shifting towards the "pre-train, prompt, and predict" paradigm, which has achieved great success on many tasks across different application domains, as in ChatGPT/Bard for conversational AI and P5 for unified recommendation. Moreover, there is growing interest in models that combine vision and language modalities (vision-language models), which are applied to tasks such as visual captioning and generation.
Considering this recent technological revolution, it is essential to have a workshop at the KDD conference that emphasizes these paradigm shifts and highlights the paradigms with the potential to solve different tasks. This workshop will provide a platform for academic and industrial researchers to showcase their latest work, share research ideas, discuss various challenges, and identify areas where further research is needed in pre-training, fine-tuning, and prompt-learning methods for large-scale AI models. The workshop will also foster a strong research community focused on solving the challenges of large-scale AI models and on delivering impactful strategies that can improve people’s lives.
We invite submissions of long (up to eight pages) and short (up to four pages) papers, representing original research, preliminary research results, and proposals for new work in academia or industry. Review will be single-blind, conducted by an international program committee of academic researchers and industry experts. Accepted submissions must be presented at the workshop and will be published in dedicated workshop proceedings by the workshop organizers.
Topics of interest in this workshop include but are not limited to:
Pre-training:
- Improvements in pre-training: supervised pre-training, self-supervised pre-training with various auxiliary tasks (see the sketch after this list), meta-learning, prompt-based learning, multi-modal pre-training, etc.
- Novel pre-training methods to maximize generalization
- Model selection for pre-trained models
- Pre-training for various application domains, such as computer vision, natural language processing, robotics, etc.
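To make the self-supervised pre-training topic above concrete, here is a minimal sketch of one masked-token pre-training step in PyTorch. Everything here (the TinyEncoder class, the vocabulary size, the 15% masking rate) is an illustrative assumption, not a reference implementation from any particular paper.

```python
# Minimal sketch of a self-supervised masked-token pre-training step.
# All names and sizes are toy assumptions for illustration only.
import torch
import torch.nn as nn

VOCAB_SIZE, MASK_ID, D_MODEL = 1000, 0, 64

class TinyEncoder(nn.Module):
    """A toy Transformer encoder with a token-prediction head."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, ids):
        return self.head(self.encoder(self.embed(ids)))

model = TinyEncoder()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

tokens = torch.randint(1, VOCAB_SIZE, (8, 32))   # a stand-in unlabeled batch
mask = torch.rand(tokens.shape) < 0.15           # corrupt ~15% of positions
corrupted = tokens.masked_fill(mask, MASK_ID)

logits = model(corrupted)                        # (batch, seq, vocab)
# Self-supervised objective: predict the original token at masked positions.
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
loss.backward()
optimizer.step()
```

The "labels" here are manufactured from raw data by the masking itself, which is what makes the objective self-supervised.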
Fine-tuning:
- Domain/task adaptive fine-tuning
- Intermediate-task, multi-task, self-supervised, and masked language model (MLM) fine-tuning
- Parameter-efficient fine-tuning: sparse parameter tuning, pruning (see the sketch after this list)
- Text-to-text, text-to-image, image-to-text, and multi-modal fine-tuning; effectively using large autoregressive pre-trained models
- Fine-tuning for various application domains, such as computer vision, natural language processing, robotics, etc.
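As one concrete instance of parameter-efficient fine-tuning from the list above, the sketch below hand-rolls a LoRA-style low-rank adapter in PyTorch: the pre-trained weight is frozen and only a small low-rank correction is trained. The class name, rank, and layer sizes are illustrative assumptions, not any specific library's API.

```python
# Hedged sketch of LoRA-style parameter-efficient fine-tuning: the frozen
# base weight is augmented with a trainable low-rank update A @ B.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)    # freeze pre-trained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(base.out_features, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, base.in_features))

    def forward(self, x):
        # Frozen path plus the low-rank correction; only A and B get gradients.
        return self.base(x) + x @ (self.A @ self.B).T

layer = LoRALinear(nn.Linear(512, 512))
trainable = [p for p in layer.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable))  # 8,192 trainable vs ~263k frozen
```

In this toy example only 8,192 of roughly 271k parameters receive gradients, which is the core trade-off such methods exploit.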
Prompt/Instruction-based Learning:
- Manual Template Engineering
- Automated Template Learning
- Multi-prompt learning; multi-task instruction tuning
- Instruction tuning with human feedback (HF/RLHF)
- Chain-of-thought (CoT) prompting (see the sketch below)
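To illustrate the CoT topic above, here is a minimal sketch of assembling a few-shot chain-of-thought prompt: the prompt carries a worked example whose intermediate reasoning the model is encouraged to imitate. The `generate` call is a hypothetical placeholder for whatever LLM completion API is in use, not a real library function.

```python
# Few-shot chain-of-thought (CoT) prompting sketch; the exemplar shows
# step-by-step reasoning the model is encouraged to imitate.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def cot_prompt(question: str) -> str:
    # Ending with "A:" invites step-by-step reasoning before the final
    # answer, instead of a bare guess.
    return COT_EXEMPLAR + f"Q: {question}\nA:"

# completion = generate(cot_prompt("..."))  # `generate` is hypothetical
print(cot_prompt("If a train travels 60 km in 1.5 hours, what is its speed?"))
```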
Performance:
- Model compression techniques (see the quantization sketch after this list)
- Large-scale model deployments
- Efficient and effective training/inference
- Empirical analysis of various pre-training and fine-tuning methods
- Generalization bounds of different pre-training and fine-tuning methods
- Stability, sparsity and robustness strategies
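As a concrete example of the model compression topic in this list, the sketch below applies PyTorch's built-in post-training dynamic quantization, which rewrites Linear layers to use int8 weights at inference time. The toy model and sizes are illustrative assumptions.

```python
# Post-training dynamic quantization sketch: one compression technique
# from the list above, applied to a toy model for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 10))
model.eval()

# Replace Linear layers with int8-weight equivalents; activations are
# quantized dynamically at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 768)
with torch.no_grad():
    print(quantized(x).shape)  # same interface, smaller weights
```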
Downstream tasks of large-scale models:
- NLP models for Text Generation, Text Summarization, Question Answering, and other downstream tasks
- CV models for Image Captioning, Semantic Segmentation, Object Tracking, and other downstream tasks
Applications powered by large-scale models:
- Conversational AI, conversational chatbots
- Enhanced web search and search engines
- Unified, personalized next-generation recommender systems
Paper Submission Deadline: June 16, 2023, 11:59 PM AoE.
Paper Notification: June 26, 2023, 11:59 PM AoE.
Camera-Ready Version: July 15, 2023, 11:59 PM AoE.
Half-Day Workshop: August 7, 2023
This workshop follows the KDD submission requirements.
Instructions:
- Long papers (up to 8 pages) and short papers (up to 4 pages); the page limit includes the bibliography and any appendices.
- Single-blind peer review
- All papers must be formatted in the ACM sigconf template manuscript style, following the submission guidelines available at: https://www.acm.org/publications/proceedings-template.
- Papers should be submitted electronically in PDF format via the EasyChair submission system
- All accepted papers will be invited for presentation at the workshop.
Inquiry Email: llmai.workshop@gmail.com
Accepted Papers:
- Retrieval-Augmented Multimodal Language Modeling by Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer and Wen-Tau Yih
- AutoHint: Automatic Prompt Optimization with Hint Generation by Hong Sun, Xue Li, Yinchuan Xu, Youkow Homma, Qi Cao, Min Wu, Jian Jiao and Denis Charles
- Text-to-Video: a Two-stage Framework for Zero-shot Identity-agnostic Talking-head Generation by Zhichao Wang, Mengyu Dai and Keld Lundgaard
- Compositional Prompting with Successive Decomposition for Multimodal Language Models by Long Hoang Dang, Thao Minh Le, Tu Minh Phuong and Truyen Tran
- Dr. LLaMA: Improving Small Language Models on PubMedQA via Generative Data Augmentation by Zhen Guo, Yanwei Wang, Peiqi Wang and Shangdi Yu
- In-Context Learning User Simulators for Task-Oriented Dialog Systems by Silvia Terragni, Modestas Filipavicius, Nghia Khau, Bruna Guedes, André Manso and Roland Mathis
- Challenges in post-training quantization of Vision Transformers by Piotr Kluska, Florian Scheidegger, A. Cristano I. Malossi and Enrique S. Quintana-Ortí
- Extractive Summarization via ChatGPT for Faithful Summary Generation by Haopeng Zhang, Xiao Liu and Jiawei Zhang
- Generalization in Graph Neural Networks: Improved PAC-Bayesian Bounds on Graph Diffusion by Haotian Ju, Dongyue Li, Aneesh Sharma and Hongyang Zhang
Workshop Schedule:
Time | Speaker | Title |
---|---|---|
8:00-8:10AM, 2023/08/07 (PDT) | Host Chair | Welcome and Opening Remarks |
8:10-8:40AM, 2023/08/07 (PDT) | Ed Chi [Google] | Talk 1: |
8:40-9:10AM, 2023/08/07 (PDT) | Tania Bedrax-Weiss [Google] | Talk 2: Large-scale AI Model Research at Google: Pre-training, Fine-tuning, and Prompt-based Learning |
9:10-9:25AM, 2023/08/07 (PDT) | Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer and Wen-Tau Yih | Paper-1: Retrieval-Augmented Multimodal Language Modeling |
9:25-9:40AM, 2023/08/07 (PDT) | Silvia Terragni, Modestas Filipavicius, Nghia Khau, Bruna Guedes, André Manso and Roland Mathis | Paper-2: In-Context Learning User Simulators for Task-Oriented Dialog Systems |
9:40-9:55AM, 2023/08/07 (PDT) | Piotr Kluska, Florian Scheidegger, A. Cristano I. Malossi and Enrique S. Quintana-Ortí | Paper-3: Challenges in post-training quantization of Vision Transformers |
9:55-10:10AM, 2023/08/07 (PDT) | Haotian Ju, Dongyue Li, Aneesh Sharma and Hongyang Zhang | Paper-4: Generalization in Graph Neural Networks: Improved PAC-Bayesian Bounds on Graph Diffusion |
10:10-10:30AM, 2023/08/07 (PDT) | Coffee Break | |
10:30-11:00AM, 2023/08/07 (PDT) | Shafiq Joty [Salesforce] | Talk 3: NLP Research in the Era of LLMs |
11:00-11:30AM, 2023/08/07 (PDT) | YiKang Shen [IBM] | Talk 4: Modular Large Language Model and Principle-Driven Alignment with Minimal Human Supervision |
11:30-11:40AM, 2023/08/07 (PDT) | Hong Sun, Xue Li, Yinchuan Xu, Youkow Homma, Qi Cao, Min Wu, Jian Jiao and Denis Charles | Paper-5: AutoHint: Automatic Prompt Optimization with Hint Generation |
11:40-11:50AM, 2023/08/07 (PDT) | Zhichao Wang, Mengyu Dai and Keld Lundgaard | Paper-6: Text-to-Video: a Two-stage Framework for Zero-shot Identity-agnostic Talking-head Generation |
11:50-12:00PM, 2023/08/07 (PDT) | Long Hoang Dang, Thao Minh Le, Tu Minh Phuong and Truyen Tran | Paper-7: Compositional Prompting with Successive Decomposition for Multimodal Language Models |
12:00-12:10PM, 2023/08/07 (PDT) | Zhen Guo, Yanwei Wang, Peiqi Wang and Shangdi Yu | Paper-8: Dr. LLaMA: Improving Small Language Models on PubMedQA via Generative Data Augmentation |
12:10-12:20PM, 2023/08/07 (PDT) | Haopeng Zhang, Xiao Liu and Jiawei Zhang | Paper-9: Extractive Summarization via ChatGPT for Faithful Summary Generation |
12:20-12:30PM, 2023/08/07 (PDT) | Closing Remarks |
Organizers:
Chidansh Bhatt, IBM Research
Wang-Cheng Kang, Google
Ruoxi Wang, Google
Hima Patel, IBM Research
Abhishek Malvankar, IBM Research
Jianmo Ni, Google
Abby Xianjing, Apple
Mengyu Dai, Salesforce
Zhichao Wang, Georgia Institute of Technology
Xue Li, Microsoft
Sandeep Singh Sandha, Abacus.AI
Sahisnu Mazumder, Intel Labs