Tutorials 

Tutorial # 1 : Let’s make teaching materials for economic mathematics using ChatGPT

  • Presenters: Prof Yukari Shirota, Prof Basabi Chakraborty, and Dr Anna Kuwana.
  • Keywords: Generative AI, Large language models, Symbolic processing AI, Deductive reasoning solution methods, Economics mathematics
  • Target audience for the tutorial: Anyone interested in having ChatGPT solve math problems, as well as university instructors who teach mathematics.

A brief introduction to the tutorial : 

In this research, we have explored the feasibility of using ChatGPT to solve economics and mathematics word problems and to generate educational graphics as teaching materials. ChatGPT, an AI-based interactive text understanding and generation tool built on statistical methods, also leverages deductive reasoning capabilities derived from large-scale language model training to solve economics and mathematics problems by combining formulas deductively. This capability is further enhanced by incorporating symbolic processing AI, such as Wolfram Cloud, allowing ChatGPT to provide complete solutions to fundamental word problems without human assistance. Additionally, ChatGPT can illustrate the deductive reasoning process as a graph, enabling the efficient generation of visual teaching aids. During the tutorial lecture, we present how ChatGPT solves problems involving Lagrange multipliers, such as maximizing a production function subject to a cost constraint. ChatGPT can completely solve, and graph the deductive reasoning process for, bond pricing and fixed-rate housing loan problems in financial mathematics. We use the resulting graphs in our economics math courses at Gakushuin University (see https://www-cc.gakushuin.ac.jp/~20010570//kakenC2024/index.html).
In the latter half of the tutorial, participants can have ChatGPT solve problems and draw deductive reasoning plan graphs. The necessary items are:

  1. ChatGPT-4
  2. Python Jupyter Notebook with the Graphviz package (or Google Colaboratory)

The math problems to be solved are (1) a tiny sample problem (the velocity of a car), (2) a Lagrange multiplier problem (a cost minimization problem), and (3) a housing loan problem with variable interest rates (0.5% to 1.8%).
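For the Lagrange multiplier exercise, the answer ChatGPT produces can be checked independently in plain Python. The sketch below assumes a Cobb-Douglas production function for illustration (the tutorial's exact problem may differ); the Lagrange first-order condition MPL/MPK = w/r pins down the optimal input ratio:

```python
import math

def cost_min(w, r, a, b, Q0):
    """Minimize cost C = w*L + r*K subject to Q0 = L**a * K**b.
    The Lagrange first-order condition MPL/MPK = w/r gives the
    optimal ratio K/L = (b*w)/(a*r); substituting into the
    production constraint yields L in closed form."""
    c = (b * w) / (a * r)               # optimal K/L ratio from the FOC
    L = (Q0 / c**b) ** (1.0 / (a + b))  # solve Q0 = c**b * L**(a+b)
    K = c * L
    return L, K, w * L + r * K

# Illustrative numbers: wage 4, capital price 1, Q = sqrt(L*K) = 10.
L, K, C = cost_min(w=4.0, r=1.0, a=0.5, b=0.5, Q0=10.0)
print(L, K, C)  # → 5.0 20.0 40.0
```

The same closed form lets participants sanity-check the deduction steps that ChatGPT lays out in its reasoning graph.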

Recommended prompts will be provided as tutorial materials. If participants cannot get ChatGPT to solve a math problem, they can run the pre-prepared Python Graphviz program to experience creating deductive reasoning graphs (see the short demo video).
[link] https://www-cc.gakushuin.ac.jp/~20010570//kakenC2024/contents/tinydemo.mp4
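For participants taking the fallback Graphviz route, a deductive reasoning plan graph is just DOT text, which can be generated from plain Python. A minimal sketch using the tiny velocity problem (the function name and node labels are illustrative, not the tutorial's prepared program):

```python
def deduction_dot(steps):
    """Build Graphviz DOT source for a deduction plan.
    steps: list of (premises, conclusion) pairs, drawn top to bottom."""
    lines = ["digraph deduction {", "  rankdir=TB;"]
    for premises, conclusion in steps:
        for p in premises:
            lines.append(f'  "{p}" -> "{conclusion}";')
    lines.append("}")
    return "\n".join(lines)

# Tiny sample problem: deduce the velocity of a car from distance and time.
dot = deduction_dot([
    (["distance d = 120 km", "time t = 2 h"], "v = d / t"),
    (["v = d / t"], "v = 60 km/h"),
])
print(dot)
```

The resulting string can be rendered with the graphviz package (e.g. `graphviz.Source(dot).render("plan")`) in Jupyter or Google Colaboratory, or pasted into any DOT viewer.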

In the first half, we will conduct a lecture, and in the second half, participants who wish to do so can experiment with creating the deductive reasoning graphs in the workshop.

Presenter :  Prof Yukari SHIROTA (Professor of Gakushuin University)

 Prof Yukari SHIROTA graduated from the Department of Information Science, Faculty of Science, the University of Tokyo, and received a D.Sc. in computer science in 1998. She conducted research in the private sector for 13 years, and in 2001 joined the Faculty of Economics, Gakushuin University, Tokyo, as an Associate Professor. In 2002, she became a Professor in the Faculty of Economics at Gakushuin University. From 2006 to 2007, she stayed at the University of Oxford, UK, as an academic visitor. She is a Fellow of the Information Processing Society of Japan, a Board Member of the Japan Society of Business Mathematics, and a Board Member of the Japanese Operations Management and Strategy Association. Her research fields are industry analysis by AI, data visualization on the web, social media analysis, and visual education methods for business mathematics. She presented the paper “An Analysis of Political Turmoil Effects on Stock Prices – a case study of US-China trade friction –” at a top conference in the “AI in Finance” field (ACM AI in Finance 2020). She organized the special session titled “Awareness Technology for Economic and Social Data Analysis” at IEEE iCAST in 2019 and 2020, enabling participants to discuss economic and social themes alongside the latest machine learning technologies. Her latest tutorials in English are:

  • Y. Shirota, "Analysis of Economic Data Using Optimization of Bending Energy of the Statistical Shape," 2023 1st International Conference on Optimization Techniques for Learning (ICOTL), IEEE, Bengaluru, India, 2023, pp. 1-5, doi: 10.1109/ICOTL59758.2023.10435023. 
  • Y. Shirota, and B. Chakraborty, TUTORIAL T1: Theoretical Explanation and Case Studies of Shapley Values in Machine Learning Regression, in International Conference on Advances in Databases, Knowledge, and Data Applications (DBKDA). 2023, International Academy, Research, and Industry Association (IARIA) XPS Press: Barcelona. https://www.iaria.org/conferences2023/TutorialsDBKDA23.html
  • Y. Shirota, “SHAP Workshop” in Indonesia University to be held on Sept. 6th, 2024, www-cc.gakushuin.ac.jp/~20010570/WS_UI2024/SHAPvideoShirota.mp4

Presenter :  Basabi Chakraborty

 Basabi Chakraborty received B.Tech, M.Tech, and Ph.D. degrees in Radio Physics and Electronics from Calcutta University, India, and worked at the Indian Statistical Institute, Calcutta, India, until 1990. From 1991 to 1993 she worked as a part-time researcher at the Advanced Intelligent Communication Systems Laboratory in Sendai, Japan. She received a Ph.D. in Information Science from Tohoku University, Japan, in 1996 and worked there as a postdoctoral research fellow until 1998. She joined the Department of Software and Information Science, Iwate Prefectural University, Japan, as a faculty member in 1998 and served as Professor and Head of the Pattern Recognition and Machine Learning Laboratory until her retirement in March 2022. She served as a visiting faculty member in the Dept. of Electrical and Computer Engineering, University of Western Ontario, Canada (Oct. 2006 – March 2007). Currently she is a Distinguished Professor and Professor Emeritus at Iwate Prefectural University. She also holds the position of Dean and Distinguished Professor in the School of Computing, Madanapalle Institute of Technology and Science, A.P., India. Her main research interests are in the areas of pattern recognition, machine learning, soft computing techniques, biometrics, data mining, and social media data mining. She is a senior life member of IEEE; a member of ACM, the Japanese Neural Network Society (JNNS), and the Japanese Society for Artificial Intelligence (JSAI); and an executive committee member of ISAJ (Indian Scientists Association in Japan). She is an active member of the IEEE WIE affinity group, having held the positions of WIE JC chair (2010-2011), founding chair of Sendai WIE (2017-2018), R10 IEEE WIE Committee Member (2019-2020), and R10 IEEE SPINIC and ARC Committee Member (2021- ). She is also secretary of the IEEE Sendai LM affinity group (2024-2025).

Presenter :  Anna Kuwana

 Anna Kuwana graduated from the Department of Computer Science, Faculty of Science, Ochanomizu University in March 2006, and completed the master's program at the same graduate school in September 2007. She received her Ph.D. from the same graduate school in September 2011. After working as a lecturer at the IT Center at Ochanomizu University, she became an assistant professor in the Division of Electronics and Informatics, Faculty of Science and Technology, Gunma University. Since 2023, she has been teaching computing and mathematics as an associate professor at the University Education Center of Wayo Women's University. In parallel, she has been in charge of exercises in the economic mathematics lectures given by Professor Shirota at the Faculty of Economics, Gakushuin University, since 2021. She has a deep understanding of economic and financial mathematics, and of where students who struggle with mathematics tend to make mistakes.

 

Tutorial # 2 : An Introduction to Blockchain as a Database, WEB3.0, and Decentralized AI (BCaDBW3AI)

Overview: 

 Recording, storage, and exchange of trusted information and knowledge are vital in many application domains. Blockchain is a foundational innovation for keeping tamper-proof (trusted) data in a permanent, immutable, fully replicated, global, and trustless ledger. It allows people, organizations, and machines to digitize their current relationships as well as to form new secure digital ones, since data is disclosed, secured, and recorded securely in a blockchain database system. Moreover, new advances in WEB3.0 are rapidly taking place, in which individuals, organizations, and machines are empowered by a decentralized system of digital identity and trust in new services and products. There is also explosive progress in generative AI, such as Large Language Models (LLMs), intelligent agents, and image recognition, that will accelerate in the years ahead. The tutorial covers blockchains as a database, including their fundamentals, their evolution toward WEB3.0, and their augmentation with AI such as LLMs and with decentralization in storage and computing. It tentatively includes: cryptographic fundamentals, blockchains as a database, the Bitcoin blockchain, tamper-proof data and trust, consensus protocols, smart contracts, Decentralized Autonomous Organizations (DAOs), cryptocurrencies and money, permissioned and permissionless blockchains, WEB3.0 history, WEB3.0 governance structures, blockchain oracles and bridges, WEB3.0 applications (DeFi, decentralized identity), generative AI and Large Language Models, representation of semantics and context, intelligent agents, intelligent agents and smart contracts, intelligent agents and RAG, AI and WEB3.0, and the limitations and future of WEB3.0, AI, and intelligent agents.
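The tamper-proof property rests on hash chaining: each block's hash commits to its own contents and to the previous block's hash, so editing any early record invalidates every block after it. A minimal sketch in Python, with no consensus, signatures, or networking (all names illustrative):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first block

def block_hash(index, data, prev):
    # The hash commits to the block's contents AND the previous hash.
    payload = json.dumps({"index": index, "data": data, "prev": prev},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_chain(records):
    chain, prev = [], GENESIS
    for i, data in enumerate(records):
        h = block_hash(i, data, prev)
        chain.append({"index": i, "data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def is_valid(chain):
    prev = GENESIS
    for b in chain:
        if b["prev"] != prev or b["hash"] != block_hash(b["index"], b["data"], prev):
            return False
        prev = b["hash"]
    return True

chain = make_chain(["alice pays bob 5", "bob pays carol 2"])
print(is_valid(chain))                     # → True
chain[0]["data"] = "alice pays bob 500"    # tamper with an early record
print(is_valid(chain))                     # → False: the chain no longer verifies
```

A real blockchain adds digital signatures, a consensus protocol, and full replication on top of exactly this chaining idea.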

Outcomes: 

Tutorial topics are timely and relevant for researchers and practitioners in many application domains. In particular, researchers and designers of AI applications will benefit from the decentralization, trust, and data sharing provided by blockchain databases and WEB3.0 applications. Tutorial attendees are expected to develop, for research, teaching, or practice, a deeper understanding and appreciation of:

  • The blockchain as a foundational innovation and its fundamentals,
  • Security and trustfulness of the data for the efficient and effective functioning of the digital world, 
  • Keeping tamper-proof (trusted) data in a permanent, immutable, global, and trustless ledger,
  • Transition from WEB2.0 to WEB3.0,
  • Web3.0 and its architecture for various application domains,
  • A new type of organization: DAOs,
  • DAOs, Governance, Digital Identity, Blockchain Oracles and Bridges, and incentivization in WEB3.0,
  • WEB3.0 applications: DeFi, Decentralized Identity, Agents, etc.,
  • Large Language Models, Structured Knowledge and Retrieval Augmented Generation,
  • Articulation of Intelligent Agents, and Intelligent Agents in WEB3.0,
  • Future of Intelligent Agents and WEB3.0.

Prerequisites

No prerequisites – exposure to basic concepts of computer science is useful.

Presenter: Abdullah Uz Tansel

 Abdullah Uz Tansel received his BS in management, and his MS and PhD degrees in computer science, from the Middle East Technical University in Ankara, Turkey. He also received an MBA degree from the University of Southern California. After being a faculty member at the Middle East Technical University, Dr. Tansel joined Baruch College, the City University of New York (CUNY), where he is currently a professor of information systems and a professor of computer science at The Graduate Center of CUNY. Professor Tansel’s research focus is on temporal databases, and he has made significant contributions in this field. He also headed the editorial board that published the first book on temporal databases, ‘Temporal Databases: Theory, Design, and Implementation’ (1993). Dr. Tansel has a patent on adding temporality to RDF. His research interests are Database Management Systems, Temporal Databases, Semantic Web, Blockchain Databases and WEB3.0, and Generative AI. Dr. Tansel has published many articles in the conferences and journals of the ACM, IEEE, and other professional associations. He is a frequent speaker on time in databases and on blockchain as a database and WEB3.0. Dr. Tansel is also a member of the ACM and the IEEE Computer Society.


Tutorial # 3 : Overview of Domain-Specific RAG Enhanced with Agent and Multi-Agent for Customer Service Excellence

Overview : 

In this tutorial, participants will dive into the transformative capabilities of Large Language Models (LLMs) and their significant impact on customer service. The session will start by examining the inherent challenges LLMs face, including issues like hallucinations and the critical need for explainability. To address these challenges, we will explore how Retrieval-Augmented Generation (RAG) methods enhance the reliability and transparency of LLMs, making them invaluable tools in specialized domains. Through a combination of theoretical discussions and hands-on exercises, attendees will gain the knowledge needed to develop domain-specific LLMs tailored to various industry needs.

Furthermore, the tutorial will cover the development and configuration of intelligent agents and dynamic multi-agent systems, showcasing how these systems can effectively cross-verify information to mitigate hallucinations. Each agent will be designed to specialize in specific functions, collaborating to deliver comprehensive and reliable outputs in customer support scenarios. Whether participants are new to AI or seasoned practitioners, they will be equipped with practical skills to deploy and refine LLMs effectively.
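The retrieval step at the heart of RAG can be sketched without any model at all: rank passages against the query, then ground the prompt in the top hits. The toy word-overlap retriever below is illustrative only (the knowledge-base strings and function names are assumptions, not the tutorial's materials):

```python
def retrieve(query, docs, k=1):
    """Rank documents by word overlap with the query (toy retriever;
    a real system would use dense embeddings or BM25)."""
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Augment the query with retrieved context before calling an LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

kb = [
    "Refunds are processed within 5 business days.",
    "Premium support is available 24/7 by phone.",
    "Orders ship from our Jakarta warehouse.",
]
print(build_prompt("how long do refunds take", kb))
```

In a multi-agent setup, a second agent could re-run retrieval over the same knowledge base to cross-check the first agent's answer against the evidence, which is the hallucination-mitigation pattern the tutorial develops.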

By the end of this session, attendees will not only understand the intricacies of LLMs but also acquire practical skills in creating robust AI solutions tailored to industry-specific applications, particularly in enhancing customer service operations. The tutorial aims to empower AI researchers, practitioners, and industry professionals with streamlined approaches to harnessing the transformative capabilities of AI.

Outcomes : 

By the end of this tutorial, participants will be able to:

  • Understand the challenges faced by Large Language Models (LLMs), including hallucinations and the need for explainability.
  • Apply Retrieval-Augmented Generation (RAG) methods to enhance the reliability and transparency of LLMs in specialized domains.
  • Develop domain-specific LLMs tailored to industry-specific needs, particularly in customer service.
  • Create and configure intelligent agents within dynamic multi-agent systems to cross-verify information and reduce hallucinations.
  • Implement robust AI solutions that integrate LLMs and intelligent agents effectively into professional practices, improving automation and support in various applications.

Prerequisites : 

 To get the most out of this tutorial, participants should have a foundational understanding of Python programming, as well as concepts in machine learning and deep learning. Familiarity with Natural Language Processing (NLP) is also essential, alongside basic knowledge of Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG). This prerequisite knowledge will ensure that all attendees can engage effectively with the material presented and participate in the hands-on exercises.

Biographies of the Tutorial Presenters : 

Edwin Simjaya’s Biography

 Edwin Simjaya is an AI expert and currently serves as the Head of AI & Software Center. With over 15 years of experience in software engineering and notable achievements in the field, Edwin has established himself as a professional in AI. His academic journey includes post-graduate studies in Mathematics at the University of Indonesia and an undergraduate degree in Computer Science from the University of Pelita Harapan.

Edwin's expertise has led to groundbreaking contributions. Notably, he led the implementation of an AI Augmented Nutrigenomic Algorithm utilizing Large Language Models (LLM), revolutionizing the field. Edwin also manages Kalbe Digital University content and implementation and has delivered key projects for internal Kalbe. 

In addition to his corporate achievements, Edwin is a frequent speaker at industry events, sharing his expertise as a keynote speaker at various bioinformatics conferences and other internal and external forums. He also served as a tutorial presenter at PRICAI 2023, further cementing his role as a leader in AI innovation and education.

Adhi Setiawan’s Biography

 Adhi Setiawan is an Artificial Intelligence Engineer at Kalbe Digital Lab, specializing in Reinforcement Learning and Computer Vision. He holds a Bachelor of Computer Science from the University of Brawijaya. Adhi's research contributions have significantly impacted the field of AI, and he has authored papers such as "Large Scale Pest Classification Using Efficient Convolutional Neural Network with Augmentation and Regularizers." He has actively researched and developed AI projects across various domains, including agriculture, smart cities, distribution logistics, and healthcare. 

Beyond his research pursuits, Adhi is involved in teaching and mentoring. He served as a Teaching Assistant at the University of Brawijaya in 2020 and has advised on various Artificial Intelligence projects within Kalbe's internal business unit. His dedication to the AI community is evident through his contributions to the Jakarta Artificial Intelligence Research, where he actively participates and shares his expertise. Adhi also served as a tutorial presenter at PRICAI 2023, showcasing his commitment to advancing knowledge in the field. 

 

Tutorial # 4 : Synthetic Data Generation through Adaptive Diffusion Models

Overview : 

 In today’s data-driven world, the ability to generate high-quality synthetic data has become a fundamental skill in advancing machine learning and AI applications. Synthetic data is increasingly used for tasks like data augmentation, privacy-preserving machine learning, and training models in environments where real data is limited or sensitive. This tutorial focuses on diffusion models, which have gained prominence as a more versatile and effective alternative to traditional generative approaches like GANs (Generative Adversarial Networks) and VAEs (Variational Autoencoders). Diffusion models have proven to be powerful tools in generating synthetic data that is not only diverse but also exhibits high fidelity, making them indispensable in modern AI workflows.

The tutorial will guide participants through a structured learning process, beginning with a theoretical introduction to diffusion models, explaining their core principles and highlighting their advantages over GANs and VAEs. Participants will understand how these models function by iteratively refining noisy data into meaningful outputs. The session will then progress to practical aspects, where attendees will explore how to fine-tune diffusion models for specific data generation tasks. This fine-tuning process is key to ensuring that the generated synthetic data is tailored to meet the needs of particular applications, whether it be for enhancing existing datasets or generating entirely new and diverse datasets.
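The "iteratively refining noisy data" description refers to learning to invert a fixed forward process that gradually destroys the signal. A minimal scalar sketch of the closed-form forward step, assuming a constant beta schedule purely for illustration:

```python
import math
import random

def alpha_bar(t, betas):
    """Cumulative signal-retention factor: product of (1 - beta_s) for s <= t."""
    out = 1.0
    for beta in betas[: t + 1]:
        out *= 1.0 - beta
    return out

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t from q(x_t | x_0) = N(sqrt(a_bar)*x0, 1 - a_bar) in one step."""
    a = alpha_bar(t, betas)
    return math.sqrt(a) * x0 + math.sqrt(1.0 - a) * rng.gauss(0.0, 1.0)

betas = [0.1] * 20            # toy constant schedule; real models use ~1000 steps
rng = random.Random(0)
for t in (0, 9, 19):
    # As t grows, alpha_bar shrinks and x_t becomes almost pure noise.
    print(t, round(alpha_bar(t, betas), 4), round(forward_diffuse(1.0, t, betas, rng), 4))
```

Training a denoising network to reverse this process, for example in PyTorch, is exactly the fine-tuning step the hands-on part of the session covers.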

The tutorial will emphasize the critical steps of data preparation and preprocessing, ensuring that participants understand how these stages impact the quality of the generated synthetic data. In addition to hands-on demonstrations using PyTorch, participants will have the opportunity to implement diffusion models in real-world scenarios, gaining practical skills they can apply to their own projects. Moreover, the session will explain some of the limitations of diffusion models, including computational complexity, training time, and scenarios where they may not perform as effectively as other generative models. By understanding both the strengths and limitations of diffusion models, attendees will be better equipped to make informed decisions about when and how to use them for synthetic data generation in various AI applications.

Outcomes : 

By the end of the tutorial, participants will have achieved the following outcomes:

  • Comprehensive Understanding: A strong grasp of the theory behind diffusion models, including their advantages over other generative models like GANs and VAEs.
  • Hands-on Skills: Practical experience in implementing and fine-tuning diffusion models using PyTorch to generate synthetic data tailored to specific requirements.
  • Data Generation Expertise: Confidence in preparing and preprocessing datasets to ensure the generation of high-quality synthetic data.
  • Strategic Insight: A deeper appreciation of the importance of mastering synthetic data generation, particularly in data augmentation, privacy-preserving machine learning, and AI model development.
  • Awareness of Limitations: Knowledge of the limitations of diffusion models and areas for future research or improvement.

Prerequisites : 

 Participants should have a solid foundation in Python programming, as the hands-on exercises will use tools like PyTorch. Familiarity with machine learning and deep learning concepts, including neural networks and model training, is essential. Additionally, a basic understanding of generative models such as GANs and VAEs will help in grasping the advantages of diffusion models presented in the tutorial. This foundational knowledge will enable participants to fully engage with both the theoretical and practical aspects of the session.

Biographies of the Tutorial Presenters : 

Edwin Simjaya’s Biography

Edwin Simjaya is an AI expert and currently serves as the Head of AI & Software Center. With over 15 years of experience in software engineering and notable achievements in the field, Edwin has established himself as a professional in AI. His academic journey includes post-graduate studies in Mathematics at the University of Indonesia and an undergraduate degree in Computer Science from the University of Pelita Harapan.

Edwin's expertise has led to groundbreaking contributions. Notably, he led the implementation of an AI Augmented Nutrigenomic Algorithm utilizing Large Language Models (LLM), revolutionizing the field. Edwin also manages Kalbe Digital University content and implementation and has delivered key projects for internal Kalbe. 

In addition to his corporate achievements, Edwin is a frequent speaker at industry events, sharing his expertise as a keynote speaker at various bioinformatics conferences and other internal and external forums. He also served as a tutorial presenter at PRICAI 2023, further cementing his role as a leader in AI innovation and education.

Adhi Setiawan’s Biography

 Adhi Setiawan is an Artificial Intelligence Engineer at Kalbe Digital Lab, specializing in Reinforcement Learning and Computer Vision. He holds a Bachelor of Computer Science from the University of Brawijaya. Adhi's research contributions have significantly impacted the field of AI, and he has authored papers such as "Large Scale Pest Classification Using Efficient Convolutional Neural Network with Augmentation and Regularizers." He has actively researched and developed AI projects across various domains, including agriculture, smart cities, distribution logistics, and healthcare. 

Beyond his research pursuits, Adhi is involved in teaching and mentoring. He served as a Teaching Assistant at the University of Brawijaya in 2020 and has advised on various Artificial Intelligence projects within Kalbe's internal business unit. His dedication to the AI community is evident through his contributions to the Jakarta Artificial Intelligence Research, where he actively participates and shares his expertise. Adhi also served as a tutorial presenter at PRICAI 2023, showcasing his commitment to advancing knowledge in the field. 

 

Tutorial # 5 : Machine Learning for Streaming Data

Overview : 

 Machine learning for data streams (MLDS) attempts to extract knowledge from a stream of non-IID data. It has been a significant research area since the late 1990s, with increasing adoption in the industry over the past few years due to the emergence of Industry 4.0, where more industry processes are monitored online. Practitioners are presented with challenges such as detecting and adapting to concept drifts, continuously evolving models, and learning from unlabeled data.

Despite commendable efforts in open-source libraries, a gap persists between pioneering research and accessible tools, presenting challenges for practitioners, including experienced data scientists, in implementing and evaluating methods in this complex domain. Our tutorial addresses this gap with a dual focus. We discuss advanced research topics, such as unlabeled data streams, while providing practical demonstrations of their implementation and assessment using Python. By catering to both researchers and practitioners, this tutorial aims to empower users in designing, conducting experiments, and extending existing methodologies.

Outcomes : 

 In this tutorial, our objective is to familiarize attendees with applying diverse machine-learning tasks to streaming data. Beyond an introductory overview, where we delineate the learning cycle of typical supervised learning tasks, we steer our focus towards pertinent challenges seldom addressed in conventional tutorials, such as:

  • Prediction intervals for regression tasks;
  • Concept drift detection, visualisation, and evaluation;
  • The idiosyncrasies of applying and evaluating clustering on a data stream.
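As a taste of the drift-detection material, here is a toy window-comparison detector. It is a heavy simplification of established detectors such as DDM or ADWIN, and the window size and threshold are illustrative assumptions:

```python
from collections import deque

def drift_detector(stream, window=50, threshold=0.5):
    """Flag a drift when the mean of the most recent `window` items moves
    more than `threshold` away from a reference window's mean (toy detector)."""
    ref, recent, drifts = deque(maxlen=window), deque(maxlen=window), []
    for i, x in enumerate(stream):
        if len(ref) < window:        # still filling the reference window
            ref.append(x)
            continue
        recent.append(x)
        if len(recent) == window:
            ref_mean = sum(ref) / window
            rec_mean = sum(recent) / window
            if abs(rec_mean - ref_mean) > threshold:
                drifts.append(i)
                # Adopt the recent window as the new reference after an alarm.
                ref, recent = deque(recent, maxlen=window), deque(maxlen=window)
    return drifts

# Abrupt drift: the stream's mean jumps from 0.0 to 2.0 at index 200.
stream = [0.0] * 200 + [2.0] * 200
print(drift_detector(stream))   # alarms shortly after the change point
```

Libraries covered in the tutorial (such as CapyMOA) implement statistically grounded versions of this idea, with proper false-alarm guarantees instead of a fixed threshold.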

Prerequisites : 

This tutorial's target audience includes researchers and practitioners, especially those interested in learning from data streams, evolving data, and/or IoT applications.

No previous experience in machine learning for data streams is required, but familiarity with traditional machine learning concepts and frameworks (like Scikit-Learn) is expected.

Presenter: Yibin Sun

 Yibin is currently pursuing his Ph.D. in Computer Science and Artificial Intelligence at the University of Waikato. His research focuses on Advanced Streaming Algorithms. Yibin has delivered guest lectures and talks at the University of Waikato’s Data Stream Mining (COMPX523 Masters) course, Cardiff University’s Machine Learning Seminar, etc.

Yibin has contributed to the field by developing the Self-Optimising K Nearest Leaves streaming regression algorithm and Adaptive Prediction Intervals, as well as by producing novel, valid datasets for the machine learning community. His research also explores the implementation of advanced, high-performance algorithms.

Yibin’s work has appeared in venues such as PRICAI, Data Mining and Knowledge Discovery, and PAKDD. Additionally, he actively contributes to and maintains the MOA (Massive Online Analysis) Stream Learning Platform and the CapyMOA Stream Learning Platform.

 

Tutorial # 6 : Generative AI in Education (GAIED)

Overview : 

 The “Generative AI in Education” (GAIED) tutorial, presented by Professor Shinobu Hasegawa, aims to explore the transformative potential of Generative AI in higher education. This half-day, on-site tutorial will provide participants with practical insights into the applications of Generative AI for teaching, learning, and research, while also addressing ethical considerations. The tutorial begins with an introduction to the fundamental concepts and technologies of Generative AI. It then delves into how Generative AI can enhance teaching by improving lectures and engaging students through interactive AI tools. For learning support, the tutorial covers personalized learning experiences, adaptive learning platforms, and AI-based assessment and feedback mechanisms. In the research support segment, participants will learn how Generative AI can boost research productivity and assist in literature reviews and data analysis. Ethical considerations are a crucial part of the tutorial, focusing on ethical frameworks, mitigating risks, ensuring responsible use, and addressing biases to ensure fairness. The session will also feature real-world case studies and interactive demonstrations, providing participants with hands-on experience and practical knowledge.

Outcomes : 

By the end of the tutorial, participants will:

  • Understand the fundamental concepts and technologies of Generative AI.
  • Gain practical insights into the applications of Generative AI in teaching, learning, and research.
  • Learn about ethical considerations and frameworks for the responsible use of Generative AI.
  • Experience real-world examples and interactive demonstrations of Generative AI tools.
  • Develop strategies for integrating Generative AI into their own educational practices.

Prerequisites : 

 It is helpful if participants have experience using Generative AI. This background will help them better understand the advanced concepts and applications discussed during the tutorial. Familiarity with educational technologies and research methodologies will also be beneficial for fully engaging with the content. Additionally, a basic understanding of AI principles and their applications in education will enable participants to maximize the benefits of the tutorial. This foundational knowledge will ensure that attendees can actively participate in discussions and practical sessions, and effectively apply the insights gained to their own educational contexts.

Presenter: Shinobu Hasegawa

 Shinobu Hasegawa is currently a director and professor at the Center for Innovative Distance Education and Research at the Japan Advanced Institute of Science and Technology (JAIST). He received his B.S., M.S., and Ph.D. degrees in system science from Osaka University in 1998, 2000, and 2002, respectively. The primary goal of his research is to facilitate “Human Learning and Computer-mediated Interaction” in distributed education. His research field is mainly AI in education and learning technology, which includes support for web-based learning, game-based learning, cognitive skill learning, affective learning, distance learning systems, and community-based learning. He also focuses on how to apply generative AI in higher education.