Everyone is curious about automation. Everyone. AI agents are the next wave of automated systems. You can give them goals. They can think, take actions, and reflect on what they have done so far to achieve that goal. AI agents don’t sleep. All that limits them is how much time and power you give them.
LLM-powered AI agents have two major components:
- The LLM, which is the ‘brain’ of the agent, used for thinking.
- The tools, the digital ‘equipment’ they use to take actions.
Common tools in modern agents include web search, webpage retrieval, extended reasoning, arithmetic, and sentiment analysis. At its core, a tool is just a function: any function you can write, you can describe and hand to an agent so it can call it. You can even place API calls inside your tools, so the agent can retrieve information from, or take actions in, virtually any app.
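To make that concrete, here is a minimal sketch of a tool written with smolagents' @tool decorator. The multiply example is purely illustrative and is not one of the tools we build below; the decorator uses the type hints and docstring to describe the tool to the agent:
from smolagents import tool

@tool
def multiply(a: float, b: float) -> float:
    """Multiplies two numbers and returns the product.

    Args:
        a: The first number
        b: The second number
    """
    return a * b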
Now, let’s build an agent that helps us do research about computers — and then edits the research to write tweets.
Let’s begin by writing our necessary imports:
from smolagents import tool, CodeAgent, HfApiModel
from huggingface_hub import login, InferenceClient
from apikey import huggingface_token
import requests
# Login to Hugging Face
login(huggingface_token)
# Inference client used by the improve_text tool defined below
client = InferenceClient(token=huggingface_token)
# Initialize the Perplexity API key
import os
from apikey import pplx_api_key
os.environ["PERPLEXITY_API_KEY"] = pplx_api_key
if not pplx_api_key:
    print("⚠️ Warning: You need to set your actual API key to use this notebook.")
else:
    print("✅ API key is set.")
# Set our model to the default Hugging Face API model
model = HfApiModel()
Now we are going to write the system prompt. This uses the ReAct framework — giving the model a goal, reasoning about it, taking actions with tools, and reflecting on the result.
Here’s an example:
agent_system_prompt = """
You are an AI agent that helps users do research on computer programming. You have the gifts of writing and logic.
Use a Thought, Action, Observation loop to do research and improve the writing of the research. Use the search tool one time only. Make sure to use each writing tool at least once. Be logical and eloquent.
Think first, observe all the paths in front of you. Then act.
After you act, observe the results. Always strive for the best writing possible.
You have the following tools at your disposal:
Tool Name: improve_text
Description: Improves and edits the given text for clarity and style
Args:
text: The text to improve and edit
Returns:
The improved and edited text
Tool Name: search_web
Description: Search the web for information using a more advanced search engine.
Args:
query: The query to search the web for
Returns:
The search results
Now GO!
Here is the user's question: {user_input}
When you are finished and have an excellent response, please return your response with the prefix <Final Answer: >
"""
Here, the prompt tells the agent how to operate, what tools it has available, and how to format its final answer.
The <Final Answer: > prefix gives the agent a consistent way to mark its finished response, so the final answer can be reliably picked out of the agent's output.
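If you want to strip that prefix before displaying the result, a small helper can do it. This extract_final_answer function is a hypothetical convenience for post-processing, not part of smolagents:
def extract_final_answer(raw: str) -> str:
    """Returns the text after the 'Final Answer:' marker, if present."""
    marker = "Final Answer:"
    if marker in raw:
        # Drop everything up to and including the marker, plus stray '>' or spaces
        return raw.split(marker, 1)[1].lstrip("> ").strip()
    return raw.strip()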
Tools
The first tool we will define is the text-editing tool, improve_text, which calls the Qwen2.5-Coder-32B-Instruct model through the Hugging Face InferenceClient we created earlier:
@tool
def improve_text(text: str) -> str:
    """Improves and edits the given text for clarity and style

    Args:
        text: The text to improve and edit

    Returns:
        The improved and edited text
    """
    try:
        response = client.chat.completions.create(
            model="Qwen/Qwen2.5-Coder-32B-Instruct",
            messages=[
                {"role": "system", "content": "You are an expert editor specializing in technical writing."},
                {"role": "user", "content": f"Improve this text while maintaining its technical accuracy: {text}"}
            ],
            max_tokens=500
        )
        return response.choices[0].message.content
    except Exception as e:
        return f"Error improving text: {str(e)}"
Next, we’ll create an advanced search tool that uses the Perplexity API to search the web for great answers:
@tool
def search_web(query: str) -> str:
    """Searches the web using Perplexity's API.

    Args:
        query: The query to search the web for

    Returns:
        The search results
    """
    try:
        headers = {
            "Authorization": f"Bearer {pplx_api_key}",
            "Content-Type": "application/json"
        }
        payload = {
            "model": "sonar-pro",
            "messages": [
                {"role": "system", "content": "You are a helpful AI assistant that focuses on computer science and provides accurate and comprehensive search results."},
                {"role": "user", "content": query}
            ]
        }
        response = requests.post(
            "https://api.perplexity.ai/chat/completions",
            headers=headers,
            json=payload
        )
        if response.status_code == 200:
            return response.json()["choices"][0]["message"]["content"]
        else:
            return f"Error: API returned status code {response.status_code}"
    except Exception as e:
        return f"Error searching web: {str(e)}"
Remember: you can use AI agents for customer support, editing text, computing geometric formulas, and more. All you have to do is think about the tools you define for your agent.
Tools can contain any function you can imagine. They are the fundamental way through which AI operates in the world. Additionally, with CodeAgents on Hugging Face, you can give agents additional authorized imports like pandas and numpy.
Here’s the last bit of code we need to create the CodeAgent, wire it to the tools and brain, and define behavior:
def research_agent(user_input: str, history: list) -> str:
    try:
        agent = CodeAgent(
            tools=[improve_text, search_web],
            model=model,
            additional_authorized_imports=["pandas", "numpy"],
            max_steps=5,
            verbosity_level=2
        )
        formatted_prompt = agent_system_prompt.format(user_input=user_input)
        result = agent.run(formatted_prompt)
        if isinstance(result, tuple):
            response = result[0]
        else:
            response = str(result)
        history.append((user_input, response))
        return response
    except Exception as e:
        error_message = f"An error occurred: {str(e)}"
        history.append((user_input, error_message))
        return error_message
# Create and launch the Gradio interface
import gradio as gr

chat_interface = gr.ChatInterface(
    fn=research_agent,
    title="Programming Research Assistant",
    description="Ask me anything about computer programming and I'll research it for you.",
    examples=[
        "What are the fundamental concepts of object-oriented programming?",
        "Explain the importance of version control in software development"
    ]
)

if __name__ == "__main__":
    chat_interface.launch(share=False)
You’ll notice that we imported Gradio.
Gradio is an abstraction layer over JavaScript/HTML/CSS that allows us to easily demo our agent.
Here, we used Gradio’s ChatInterface component.
Conclusion
We learned how to build a simple AI agent that searches the web using the Perplexity API and refines the text it retrieves.
Here are some additional tools you might add:
- Save key insights to a SQL database (a quick sketch of such a tool follows this list).
- Review the writing with the persona of a world-expert computer researcher at Stanford.
- Enable more dynamic agent self-reflection loops.
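For the first idea, here is a minimal sketch of what a save_insight tool could look like, using Python's built-in sqlite3 module. The function name, table schema, and database filename are illustrative choices, not part of the agent built above:
import sqlite3
from smolagents import tool

@tool
def save_insight(insight: str) -> str:
    """Saves a key research insight to a local SQLite database.

    Args:
        insight: The insight text to store
    """
    # Illustrative filename; swap in whatever storage you prefer
    conn = sqlite3.connect("research_insights.db")
    try:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS insights ("
            "id INTEGER PRIMARY KEY AUTOINCREMENT, text TEXT)"
        )
        conn.execute("INSERT INTO insights (text) VALUES (?)", (insight,))
        conn.commit()
        return "Insight saved."
    finally:
        conn.close()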
What are you building with agents today? Super curious about it!