Clean Code; Refactor README

This commit is contained in:
Timothyxxx
2024-03-27 16:21:49 +08:00
parent ee8e9451b4
commit 26ed70ef70
6 changed files with 128 additions and 91 deletions

CONTRIBUTION.md Normal file

README.md

@@ -1,10 +1,4 @@
# OSWorld: Open-Ended Tasks in Real Computer Environments
<p align="center">
<img src="desktop_env/assets/icon.jpg" alt="Logo" width="80px">
<br>
<b>SLOGAN</b>
</p>
# OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
<p align="center">
<a href="">Website</a>
@@ -14,74 +8,65 @@
![Overview]()
## Updates
- 2024-03-01: We released our [paper](), [environment code](), [dataset](), and [project page](). Check it out!
- 2024-03-28: We released our [paper](), [environment and benchmark](), and [project page](https://os-world.github.io/). Check it out!
## Install
1. Install VMWare and configure `vmrun` command:
Please refer to [guidance](https://docs.google.com/document/d/1KBdeZwmZs2Vi_Wsnngb3Wf1-RiwMMpXTftwMqP2Ztak/edit#heading=h.uh0x0tkl7fuw)
1. Install VMware and configure the `vmrun` command, then verify the installation with:
```bash
vmrun -T ws list
```
2. Install the environment package, download the examples and the virtual machine image.
For x86_64 Linux or Windows, you can install the environment package and download the examples and the virtual machine image by running the following commands:
For Linux or Windows on an x86_64 CPU, you can install the environment package and download the examples and the virtual machine image by running the following commands:
Remove the `nogui` parameter if you want to see what happens in the virtual machine.
```bash
git clone https://github.com/xlang-ai/DesktopEnv
cd DesktopEnv
git clone https://github.com/xlang-ai/OSWorld
cd OSWorld
pip install -r requirements.txt
gdown https://drive.google.com/drive/folders/1HX5gcf7UeyR-2UmiA15Q9U-Wr6E6Gio8 -O Ubuntu --folder
vmrun -T ws start "Ubuntu/Ubuntu.vmx" nogui
vmrun -T ws snapshot "Ubuntu/Ubuntu.vmx" "init_state"
```
For macOS on Apple silicon, install the specially prepared virtual machine image by running the following commands:
```bash
gdown https://drive.google.com/drive/folders/xxx -O Ubuntu --folder
vmrun -T fusion start "Ubuntu/Ubuntu.vmx"
vmrun -T fusion snapshot "Ubuntu/Ubuntu.vmx" "init_state"
```
## Quick Start
Run the following minimal example to interact with the environment:
```python
import json
from desktop_env.envs.desktop_env import DesktopEnv
with open("evaluation_examples/examples/gimp/f723c744-e62c-4ae6-98d1-750d3cd7d79d.json", "r", encoding="utf-8") as f:
example = json.load(f)
example = {
"id": "94d95f96-9699-4208-98ba-3c3119edf9c2",
"instruction": "I want to install Spotify on my current system. Could you please help me?",
"config": [{"type": "execute", "parameters": {"command": ["python","-c","import pyautogui; import time; pyautogui.click(960, 540); time.sleep(0.5);"]}}], "evaluator": {"func": "check_include_exclude", "result": {"type": "vm_command_line","command": "which spotify"}, "expected": {"type": "rule","rules": {"include": ["spotify"], "exclude": ["not found"]}}}
}
env = DesktopEnv(
path_to_vm=r"path_to_vm",
action_space="computer_13",
path_to_vm="Ubuntu/Ubuntu.vmx",
action_space="pyautogui",
task_config=example
)
observation = env.reset()
observation, reward, done, info = env.step({"action_type": "CLICK", "parameters": {"button": "right", "num_clicks": 1}})
obs = env.reset()
obs, reward, done, info = env.step("pyautogui.rightClick()")
```
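Beyond a single `step`, a typical episode feeds the agent's predicted actions back into the environment in a loop. The snippet below is a minimal sketch of such a loop, not the official runner: it reuses the `example` dict from the snippet above, uses the `PromptAgent` described in `mm_agents/README.md`, and the step budget and the final `env.evaluate()` call are illustrative assumptions.
```python
from desktop_env.envs.desktop_env import DesktopEnv
from mm_agents.agent import PromptAgent

# Hypothetical driver loop; the maintained entry point is run.py.
env = DesktopEnv(path_to_vm="Ubuntu/Ubuntu.vmx", action_space="pyautogui", task_config=example)
agent = PromptAgent(model="gpt-4-0125-preview", observation_type="screenshot")
agent.reset()

obs = env.reset()
done = False
for _ in range(15):  # illustrative step budget
    response, actions = agent.predict(example["instruction"], obs)
    for action in actions:
        obs, reward, done, info = env.step(action)
        if done:
            break
    if done:
        break

score = env.evaluate()  # assumed to run the example's evaluator and return a score
```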
## Annotation Tool Usage
We provide an annotation tool to help you annotate the examples.
## Run Benchmark
### Run the Baseline Agent
If you want to run the baseline agent we use in our paper, you can run the following command as an example:
```bash
## Agent Usage
We provide a simple agent to interact with the environment. You can use it as a starting point to build your own agent.
```
## Road map of infra (Proposed)
### Run Evaluation of Your Agent
Please first read through the [agent interface](https://github.com/xlang-ai/OSWorld/mm_agents/README.md) and the [environment interface](https://github.com/xlang-ai/OSWorld/desktop_env/README.md).
Implement the agent interface correctly and import your customized agent in the `run.py` file.
Then, you can run the following command to evaluate your agent:
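As an example, an invocation might look like the sketch below; the exact flags of `run.py` are not shown here, so treat the parameter names as assumptions and check `python run.py --help` for the actual interface.
```bash
# Hypothetical flags -- verify against run.py before use.
python run.py \
    --path_to_vm "Ubuntu/Ubuntu.vmx" \
    --observation_type screenshot \
    --action_space pyautogui \
    --model gpt-4-0125-preview \
    --result_dir ./results
```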
- [x] Explore VMware, and whether it can be connected to and controlled through the mouse package
- [x] Explore Windows and macOS, and whether they can be installed
  - macOS is closed source and cannot be legally installed
  - Windows is legally available and can be installed
- [x] Build a gym-like Python interface for controlling the VM
- [x] Record actions (mouse movement, clicks, keyboard) for humans to annotate, so that we can replay and compress them
- [x] Build a simple task, e.g. open a browser, open a website, click on a button, and close the browser
- [x] Set up a pipeline and build agent implementations (zero-shot) for the task
- [x] Start to decide which tasks inside the DesktopEnv to focus on, and start to wrap up the environment for public release
- [x] Start to annotate the examples for ~~training~~ and testing
- [x] Error handling during file passing and file opening, etc.
- [x] Add the accessibility tree from the OS to the observation space
- [x] Add pre-processing and post-processing action support for benchmark setup and evaluation
- [ ] Multiprocess support, which can make reinforcement learning more efficient
- [ ] Experiment logging and visualization system
- [ ] Add more tasks, maybe scale to 300 for v1.0.0, and create a dynamic leaderboard
## Road map of benchmark, tools and resources (Proposed)
- [ ] Improve the annotation tool based on DuckTrack, and make it more robust so that it aligns with the accessibility tree
- [ ] Annotate the steps for completing each task
- [ ] Build a website for the project
- [ ] Crawl all the resources we explored from the internet, and make them easy to access
- [ ] Set up ways for the community to contribute new examples
## Citation
If you find this environment useful, please consider citing our work:

desktop_env/README.md Normal file

mm_agents/README.md Normal file

@@ -0,0 +1,65 @@
# Agent
## Prompt-based Agents
### Supported Models
We currently support the following models as the foundation models for the agents:
- `GPT-3.5` (gpt-3.5-turbo-16k, ...)
- `GPT-4` (gpt-4-0125-preview, gpt-4-1106-preview, ...)
- `GPT-4V` (gpt-4-vision-preview, ...)
- `Gemini-Pro`
- `Gemini-Pro-Vision`
- `Claude-3, 2` (claude-3-haiku-20240307, claude-3-sonnet-20240229, ...)
- ...
And those from the open-source community:
- `Mixtral 8x7B`
- `QWEN`, `QWEN-VL`
- `CogAgent`
- ...
We will integrate and support more foundation models for digital agents in the future; stay tuned.
### How to use
```python
from mm_agents.agent import PromptAgent
agent = PromptAgent(
model="gpt-4-0125-preview",
observation_type="screenshot",
)
agent.reset()
# say we have an instruction and an observation
instruction = "Please help me to find the nearest restaurant."
obs = {"screenshot": "path/to/observation.jpg"}
response, actions = agent.predict(
instruction,
obs
)
```
### Observation Space and Action Space
We currently support the following observation spaces:
- `a11y_tree`: the a11y tree of the current screen
- `screenshot`: a screenshot of the current screen
- `screenshot_a11y_tree`: a screenshot of the current screen with the a11y tree
- `som`: the set-of-mark trick applied to the current screen, with table metadata
And the following action spaces:
- `pyautogui`: valid Python code that uses the `pyautogui` library
- `computer_13`: a set of enumerated actions designed by us
To feed an observation into the agent, keep the `obs` variable as a dict with the corresponding information:
```python
obs = {
"screenshot": "path/to/observation.jpg",
"a11y_tree": "" # [a11y_tree data]
}
response, actions = agent.predict(
instruction,
obs
)
```
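If you use the `computer_13` action space instead, the agent's actions come back as structured dicts rather than `pyautogui` code strings. Below is a sketch of a single such action, mirroring the click example from the older Quick Start snippet; the field names follow that example and are not an exhaustive reference.
```python
# Illustrative computer_13-style action dict, based on the earlier README example.
action = {
    "action_type": "CLICK",
    "parameters": {"button": "right", "num_clicks": 1},
}
obs, reward, done, info = env.step(action)  # env constructed with action_space="computer_13"
```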
## Efficient Agents, Q* Agents, and more
Stay tuned for more updates.

mm_agents/agent.py

@@ -180,6 +180,7 @@ def trim_accessibility_tree(linearized_accessibility_tree, max_tokens):
linearized_accessibility_tree += "[...]\n"
return linearized_accessibility_tree
class PromptAgent:
def __init__(
self,
@@ -572,22 +573,10 @@ class PromptAgent:
logger.debug("CLAUDE MESSAGE: %s", repr(claude_messages))
# headers = {
# "x-api-key": os.environ["ANTHROPIC_API_KEY"],
# "anthropic-version": "2023-06-01",
# "content-type": "application/json"
# }
# headers = {
# "Accept": "application / json",
# "Authorization": "Bearer " + os.environ["ANTHROPIC_API_KEY"],
# "User-Agent": "Apifox/1.0.0 (https://apifox.com)",
# "Content-Type": "application/json"
# }
headers = {
"Authorization": os.environ["ANTHROPIC_API_KEY"],
"Content-Type": "application/json"
"x-api-key": os.environ["ANTHROPIC_API_KEY"],
"anthropic-version": "2023-06-01",
"content-type": "application/json"
}
payload = {
@@ -598,28 +587,21 @@ class PromptAgent:
"top_p": top_p
}
max_attempts = 20
attempt = 0
while attempt < max_attempts:
# response = requests.post("https://api.aigcbest.top/v1/chat/completions", headers=headers, json=payload)
response = requests.post("https://token.cluade-chat.top/v1/chat/completions", headers=headers,
json=payload)
if response.status_code == 200:
result = response.json()['choices'][0]['message']['content']
break
else:
logger.error(f"Failed to call LLM: {response.text}")
time.sleep(10)
attempt += 1
response = requests.post(
"https://api.anthropic.com/v1/messages",
headers=headers,
json=payload
)
if response.status_code != 200:
logger.error("Failed to call LLM: " + response.text)
time.sleep(5)
return ""
else:
print("Exceeded maximum attempts to call LLM.")
result = ""
return result
return response.json()['content'][0]['text']
elif self.model.startswith("mistral"):
print("Call mistral")
messages = payload["messages"]
max_tokens = payload["max_tokens"]
top_p = payload["top_p"]
@@ -652,7 +634,9 @@ class PromptAgent:
response = client.chat.completions.create(
messages=mistral_messages,
model=self.model,
max_tokens=max_tokens
max_tokens=max_tokens,
top_p=top_p,
temperature=temperature
)
break
except:
@@ -670,7 +654,6 @@ class PromptAgent:
elif self.model.startswith("THUDM"):
# THUDM/cogagent-chat-hf
print("Call CogAgent")
messages = payload["messages"]
max_tokens = payload["max_tokens"]
top_p = payload["top_p"]
@@ -703,7 +686,9 @@ class PromptAgent:
payload = {
"model": self.model,
"max_tokens": max_tokens,
"messages": cog_messages
"messages": cog_messages,
"temperature": temperature,
"top_p": top_p
}
base_url = "http://127.0.0.1:8000"
@@ -717,7 +702,6 @@ class PromptAgent:
print("Failed to call LLM: ", response.status_code)
return ""
elif self.model.startswith("gemini"):
def encoded_img_to_pil_img(data_str):
base64_str = data_str.replace("data:image/png;base64,", "")
@@ -802,7 +786,8 @@ class PromptAgent:
messages = payload["messages"]
max_tokens = payload["max_tokens"]
top_p = payload["top_p"]
temperature = payload["temperature"]
if payload["temperature"]:
logger.warning("Qwen model does not support temperature parameter, it will be ignored.")
qwen_messages = []
@@ -821,7 +806,9 @@ class PromptAgent:
response = dashscope.MultiModalConversation.call(
model='qwen-vl-plus',
messages=messages, # todo: add the hyperparameters
messages=messages,
max_length=max_tokens,
top_p=top_p,
)
# A response status_code of HTTPStatus.OK indicates success;
# otherwise the request failed, and you can get the error code

Binary file not shown.
