
Double image #10144

Closed
@yolk23321

Description

Checklist

  • I have searched the existing issues for similar issues.
  • I added a very descriptive title to this issue.
  • I have provided sufficient information below to help reproduce this issue.

Summary

(Screenshot attached showing the image rendered twice in the sidebar.)

Reproducible Code Example

import streamlit as st
from openai import OpenAI

# st.title("Hello World")
#
# st.write("""
# ### My First App
# Hello World!
# Have you eaten?
# """)

# Define the sidebar
with st.sidebar:
    st.markdown(f"""
    <center>
    <img  src="https://vip.helloimg.com/i/2024/07/02/66841f6f4a3a5.png" width='100' />
    </center>
    <h1> MoBot </h1>
    """, unsafe_allow_html=True)

    # api_key = st.text_input("API Key", "")
    # System message input (role definition)
    system_message = st.text_area("Role definition", "You are an AI assistant that helps users.")
    # Temperature slider (creativity)
    temperature = st.slider('Creativity', min_value=0.0, max_value=2.0, value=1.0, step=0.1,
                            help='Higher values produce more creative output', format='%0.1f')

# Title of the chatbot window on the right
st.title("AI Chatbot")

# Message history sent to the OpenAI API
messagesHistory = []

# Initialize the chat history shown in the UI via session_state
if "messages" not in st.session_state:
    st.session_state.messages = [
        {
            # role controls the avatar: assistant = bot, user = human
            "role": "assistant",
            # message content
            "content": "Hi, I'm Mobot ~ nice to meet you! I answer every question and focus on understanding you."
        },
        # {
        #     "role": "user",
        #     "content": "Hello"
        # }
    ]
    # Start a new conversation
    messagesHistory = []

# Render the existing chat messages
for message in st.session_state.messages:
    with st.chat_message(message['role']):
        st.markdown(message['content'])

client = OpenAI(
    api_key="xxxxxxx",
    base_url="https://api.openai.com/v1"
)


def chat(prompt, temperature=1):
    if system_message:
        messagesHistory.append({
            "role": "system",
            "content": system_message
        })

    messagesHistory.append({
        "role": "user",
        "content": prompt
    })

    # Call the OpenAI chat completions API
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messagesHistory,
        temperature=temperature,
        # Enable streaming output
        stream=True
    )

    return response


# Handle user input
user_query = st.chat_input("Say something...")
if user_query:
    # Show the user's message in the chat window
    with st.chat_message("user"):
        st.write(user_query)
    st.session_state.messages.append(
        {
            "role": "user",
            "content": user_query
        }
    )
    with st.chat_message("assistant"):
        # Show a spinner while waiting for the response
        with st.spinner(""):
            # Call the model and get a streaming response
            response = chat(user_query, temperature)
            # Placeholder container for the streamed message
            message_placeholder = st.empty()
            message_placeholder.markdown("")
            # Accumulated AI answer
            ai_response = ""
            for chunk in response:
                # Extract the delta content from this chunk, if any
                chunk_message = ""
                if chunk.choices and chunk.choices[0].delta.content:
                    chunk_message = chunk.choices[0].delta.content
                # Build up the full answer
                ai_response += chunk_message
                # Stream the partial answer into the chat window
                message_placeholder.markdown(ai_response + "▌")
            # Record the AI answer in the chat history
            st.session_state.messages.append({
                "role": "assistant",
                "content": ai_response
            })
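
A minimal sketch of a reduced reproduction (this trimming is an assumption added for reference, not part of the original report): it keeps only the sidebar st.markdown block with the embedded <img> tag, which should be enough to check whether the image renders twice without involving the OpenAI client or the chat loop.

import streamlit as st

# Reduced sketch: only the sidebar HTML image block from the full example above is kept.
with st.sidebar:
    st.markdown("""
    <center>
    <img src="https://vip.helloimg.com/i/2024/07/02/66841f6f4a3a5.png" width='100' />
    </center>
    <h1> MoBot </h1>
    """, unsafe_allow_html=True)

st.title("AI Chatbot")

If the duplicated image still appears with this snippet, the OpenAI calls and the streaming chat loop can be ruled out as the trigger.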

Steps To Reproduce

No response

Expected Behavior

No response

Current Behavior

No response

Is this a regression?

  • Yes, this used to work in a previous version.

Debug info

  • Streamlit version:
  • Python version:
  • Operating System:
  • Browser:

Additional Information

No response
