Google Developer Group

Yonsei University

25-26 Fourth T19

Event Time

2025. 09. 30. 19:00 – 21:00

Location

Yonsei University, Engineering Hall 2, Room 731

Contents

The T19 Session is the main session of GDGoC Yonsei.

  • In the APT Update Session, presenters share insights on the latest technologies and tech trends.
  • In the Tech Session, presenters give talks on technologies of their own interest.

✔️ APT Update

  • ML/AI Member – Yena Kim

Self-Improving LMM (SILMM) is a framework designed to autonomously solve the problem of compositional alignment. Unlike traditional approaches such as manual prompt engineering, CLIP-based alignment evaluation, or human annotation, SILMM offers a more scalable alternative: a self-learning model that creates its own problems, evaluates its own outputs, and iteratively improves through five stages:

  1. Generating compositional prompts
  2. Producing diverse image candidates
  3. Decompositional self-questioning
  4. VQA-based self-feedback
  5. Learning from self-feedback

Through this process, SILMM has demonstrated overall performance improvements of 20–100%, with strong potential to scale further to larger models and broader datasets.
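The five stages above can be sketched as a single self-improvement round. This is a toy illustration only; every class and function name below is a hypothetical stand-in, not the framework's actual API.

```python
import random

random.seed(0)  # deterministic toy run

class ToyLMM:
    """Hypothetical stand-in for a large multimodal model."""

    def generate_compositional_prompt(self):
        # Stage 1: create a compositional prompt.
        return "a red cube on a blue sphere"

    def generate_image(self, prompt):
        # Stage 2: produce one image candidate (here, just a tag string).
        return f"image({prompt}, seed={random.random():.3f})"

    def decompose(self, prompt):
        # Stage 3: decompositional self-questioning.
        return ["Is there a red cube?",
                "Is there a blue sphere?",
                "Is the cube on the sphere?"]

    def answer(self, image, question):
        # Stage 4: VQA-based self-feedback (toy random scorer).
        return random.random()

def self_improvement_round(model, n_candidates=4):
    prompt = model.generate_compositional_prompt()                    # stage 1
    images = [model.generate_image(prompt) for _ in range(n_candidates)]  # stage 2
    questions = model.decompose(prompt)                               # stage 3
    scores = [sum(model.answer(img, q) for q in questions) / len(questions)
              for img in images]                                      # stage 4
    # Stage 5: keep a preferred/rejected pair to learn from.
    best = images[scores.index(max(scores))]
    worst = images[scores.index(min(scores))]
    return best, worst

best, worst = self_improvement_round(ToyLMM())
```

In the real framework the feedback would update the model's weights; here the round simply surfaces a best/worst candidate pair, which is the signal such self-feedback learning consumes.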

✔️ Tech Session

  • Cloud Member – Jaebaek Hong

This session discussed cloud deployment strategies for mitigating the risks of internet exposure. Using CIDR (Classless Inter-Domain Routing), IP addresses can be managed efficiently, while a VPC (Virtual Private Cloud) provides a secure, isolated network environment. Public and private subnets are separated to manage external connectivity, and NACLs (Network Access Control Lists) and Security Groups add layered traffic control. Tools like NAT Gateways and VPC Endpoints improve performance and reduce costs, while Bastion Hosts let administrators safely reach private subnets. Altogether, these practices allow VPCs to maintain both high security and operational efficiency.
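The CIDR-based subnet layout described above can be sketched with Python's standard `ipaddress` module. The address ranges and the public/private split are illustrative, not taken from any real deployment.

```python
import ipaddress

# The VPC's CIDR block (illustrative).
vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve the /16 into /24 subnets; designate the first two as public
# (internet-facing, e.g. NAT Gateway, Bastion Host) and the next two
# as private (application and database tiers).
subnets = list(vpc.subnets(new_prefix=24))
public_subnets = subnets[:2]    # 10.0.0.0/24, 10.0.1.0/24
private_subnets = subnets[2:4]  # 10.0.2.0/24, 10.0.3.0/24

# A host in a private subnet is inside the VPC's address space but is
# only reachable from outside via the Bastion Host or NAT Gateway.
app_host = ipaddress.ip_address("10.0.2.15")
assert app_host in private_subnets[0]
assert app_host not in public_subnets[0]
```

Keeping the subnet plan in code like this makes it easy to check that no ranges overlap before writing any NACL or Security Group rules.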

  • DevRel Member – Seohyun Yang

This session explored Interactive Art, a form of digital art in which the audience plays a crucial role, with works responding dynamically to changing inputs. Applications extend to showrooms, pop-up stores, VR/AR-based education, and even healthcare through wearables and rehabilitation tools. While earlier implementations required C and custom hardware, with close collaboration between artists and engineers, today tools like TouchDesigner and Unity provide AI-based recognition environments that empower artists to manage the full process, from ideation to realization, more independently. Interactive art highlights that no matter how advanced technology becomes, it brings back deeply human elements of movement, voice, and gaze, using cutting-edge tools to create new experiences that begin with the individual.

  • ML/AI Member – Yerin Cho

Fine-tuning refers to further training a pre-trained large language model (LLM) on a specific dataset to make it better suited to a given task or domain. However, fine-tuning LLMs presents challenges, such as the "sofa problem," where the extremely large dimensionality of the weights (W) makes directly training the update ΔW both time-consuming and costly. LoRA (Low-Rank Adaptation) provides an efficient alternative: instead of updating all parameters, LoRA freezes the original weights and learns only two small matrices whose product approximates ΔW with far fewer parameters. This drastically reduces parameter counts, storage requirements, and computational overhead, saving both cloud costs and GPU resources. With tools like Hugging Face's PEFT library, training with LoRA is straightforward to set up. Moreover, variants such as LoRA+, LoRA-FA, and VeRA strike different balances between performance and efficiency, enabling lighter and more affordable model training.
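The parameter savings can be seen directly in the math. Below is a minimal NumPy sketch of the LoRA idea (not the PEFT library API): a full update ΔW for a d_out × d_in weight would train every entry, whereas LoRA trains only B (d_out × r) and A (r × d_in) with rank r much smaller than the dimensions; the shapes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 1024, 1024, 8   # illustrative sizes; in practice r << d

W = rng.standard_normal((d_out, d_in))    # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01 # small random init, as in LoRA
B = np.zeros((d_out, r))                  # zero init, so delta_W starts at 0

delta_W = B @ A              # low-rank approximation of the full update
W_adapted = W + delta_W      # effective fine-tuned weight

full_params = d_out * d_in   # parameters a full update would train
lora_params = B.size + A.size  # parameters LoRA actually trains
print(f"full: {full_params:,}  lora: {lora_params:,} "
      f"({100 * lora_params / full_params:.2f}%)")
```

With these sizes, LoRA trains 16,384 parameters instead of 1,048,576 (about 1.56%), and because B starts at zero, the adapted weight initially equals the frozen W, so training starts from the pre-trained behavior.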
