
Hi there 👋

I'm Ruiyang Sun (/ru̯eɪ̯ jɑŋ swən/, or Ryan Sun; 孙睿阳). I'm a 🎓 senior undergraduate student pursuing a double major in 🧲 Physics and 🤖 Artificial Intelligence (AI) at 🏛️ Peking University.

I have previously worked in the areas of 🔒 Safe Reinforcement Learning (Safe RL) and 🤖🧭 LLM Alignment, under the guidance of Prof. Yaodong Yang at the PKU Pair Lab. Currently, my research focuses on 🌱 Emergent Socio-Dynamic Behavior and 🧩 Alignment Issues in 🤖👥 AI-Human Societies, with particular attention to advanced AI systems such as 🗣️ Large Language Models (LLMs), 🖼️ Large Multimodal Models (LMMs), and 🤖💼 LLM-powered Autonomous Agents. I hope my research contributes to the safer, more harmonious, and more dignified integration of 🤖 AI systems into 👨‍👩‍👧‍👦 human society. Here are some of the key research questions I'm exploring:

  1. When 🤖 AI systems are treated as human-like entities in AI-human mixed ecosystems, can we observe the 🌟 emergence of human-like socio-dynamic behaviors from these AI systems?

  2. In what ways do the behaviors of 🤖 AI systems diverge from those of 👥 humans, and how can these distinctions inform our understanding of human-AI 🤝 interactions?

  3. How can emergent behaviors (e.g., 🤝 social learning, 🤗 cooperation) enhance the intelligence of 🤖 AI systems and the collective 🧠 intelligence of AI-human societies?

  4. What forms of ⚠️ misalignment might arise between 👥 humans and 🤖 AI systems in AI-human mixed ecosystems, and what strategies can be used to 🛠️ mitigate these misalignments effectively?

I believe that developing more 🤖 intelligent and ethical AI systems requires drawing insights not only from 💻 Computer Science but also from fields like 🧠 Psychology, 👥 Sociology, and 🧠 Cognitive Science. I welcome discussions with people from diverse perspectives! 🌍

Besides research, I'm also a passionate 🍎 Apple developer, and I'm currently building a new 🤖 AI-powered teamwork research toolkit. I'd love to discuss or collaborate on this with anyone interested! 🤝

Where you can find me:

Pinned

  1. PKU-Alignment/safe-rlhf

    Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback. (A simplified sketch of its constrained objective appears after this list.)

    Python · 1.3k stars · 118 forks

  2. huggingface/transformers

    🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.

    Python · 134k stars · 26.8k forks

  3. PKU-Alignment/omnisafe

    JMLR: OmniSafe is an infrastructural framework for accelerating SafeRL research.

    Python · 927 stars · 132 forks

  4. StreamUtilities

    StreamUtilities is a toolbox providing two utilities for working with streams in Swift: SyncStream, a class that generates a sequence of values and operates synchronously, and BidirectionalStream, a cla… (A toy sketch of the SyncStream idea appears after this list.)

    Swift · 1 star
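
For anyone curious about the first pinned project: in rough terms (my notation here, a simplified view rather than the paper's exact formulation), Safe RLHF decouples human feedback into a reward model $R_{\phi}$ for helpfulness and a cost model $C_{\psi}$ for harmlessness, then fine-tunes the policy $\pi_{\theta}$ under a safety constraint:

```math
\max_{\theta}\ \mathbb{E}_{x \sim \mathcal{D},\ y \sim \pi_{\theta}(\cdot \mid x)}\big[R_{\phi}(x, y)\big]
\quad \text{s.t.} \quad
\mathbb{E}_{x \sim \mathcal{D},\ y \sim \pi_{\theta}(\cdot \mid x)}\big[C_{\psi}(x, y)\big] \le 0
```

which is typically handled through a Lagrangian relaxation, updating the multiplier $\lambda$ alongside the policy:

```math
\min_{\lambda \ge 0}\ \max_{\theta}\ \mathbb{E}\big[R_{\phi}(x, y)\big] - \lambda\, \mathbb{E}\big[C_{\psi}(x, y)\big]
```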
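
And for StreamUtilities: the snippet below is a deliberately tiny, hypothetical illustration of the "synchronous stream" idea, not the library's actual API. `MiniSyncStream`, its `emit` callback, and the eager buffering are all my own simplifications for the sketch.

```swift
// A toy, hypothetical sketch of a SyncStream-like type -- not the real
// StreamUtilities API. A producer closure pushes values through an
// `emit` callback; consumers then iterate them synchronously, like any
// Swift Sequence.
struct MiniSyncStream<Element>: Sequence {
    private let buffer: [Element]

    // Run the producer eagerly and collect everything it emits.
    // (A real implementation would presumably be lazy; eager buffering
    // just keeps this sketch short.)
    init(_ produce: (_ emit: (Element) -> Void) -> Void) {
        var collected: [Element] = []
        produce { collected.append($0) }
        self.buffer = collected
    }

    func makeIterator() -> IndexingIterator<[Element]> {
        buffer.makeIterator()
    }
}

// Usage: generate a sequence of values, then consume it synchronously.
let squares = MiniSyncStream<Int> { emit in
    for n in 1...5 { emit(n * n) }
}
for value in squares {
    print(value)  // prints 1, 4, 9, 16, 25
}
```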