Until now, building an AI-enabled application usually meant getting an OpenAI (or other provider) API key to talk to their services: very powerful LLMs in exchange for some API tokens.
However, Google recently announced that we can easily run its smallest model, Gemini Nano, on our own computers, right inside the Chrome browser. This is not new territory for Google, as the model already ships on some Android devices.
For now, this is not available in the stable version of Chrome, only in the Dev and Canary channels, so it is still an early-stage built-in API. But we developers like to try new things, which is why I came up with the idea of building something with it.
As soon as this new browser API came out, Vercel updated their AI SDK to support it as well.
With these tools, nothing is simpler than building an application that needs no API key, runs 100% locally (yes, even without internet), and still has some cool AI features.
What are we going to build?
Let's build a small mini-blog application that, at the press of a button, generates hashtags matching the content of an article. This is exactly the kind of task the Gemini Nano model can easily cope with.
Technology stack
Next.js
Vercel AI SDK
Chrome AI
TailwindCSS
Prerequisites
Before we get started, you need to download the Chrome Dev version. Once that's done, start it and adjust a few settings:
Enter this "address" in the address bar, then press enter:
chrome://flags
Here, look for the “Enables optimization guide on device” setting and set it to Enabled BypassPerfRequirement. Then set the “Prompt API for Gemini Nano” setting to Enabled. In the end, your configuration should look like this:
Then restart Chrome and type this into the address bar:
chrome://components
Check here whether the "Optimization Guide On Device Model" component is up to date, and if not, update it. This component contains our model. Chrome will probably have to download it the first time you use it, which takes a few minutes.
If everything is fine, you should see something like this:
If we did everything right, we can already play with our local AI model on this page. Try turning off the internet as well. Still works, and fast, right? Just think how useful this could be on platforms where it matters that data never leaves the local machine and/or fast response time is a key factor.
Implementation
And then the exciting part begins. Let's start making the app!
To begin, run the well-known command in a terminal:
npx create-next-app@latest
As usual, name the application and accept the default settings. You can enter anything for the name; mine was:
local-first-article-summary
Now on to building it. Let's create the smallest building blocks first: the components and the markdown files of the blog posts.
We will have an ArticleCard.tsx file that renders the card view of each article and receives the article as a prop:
import Link from 'next/link'
import { Article } from '@/types'

const ArticleCard = ({ article }: { article: Article }) => {
  return (
    <div className="group rounded-lg border border-transparent px-5 py-4 transition-colors hover:border-gray-300 hover:bg-gray-100 hover:dark:border-neutral-700 hover:dark:bg-neutral-800/30">
      <h2 className="mb-3 text-2xl font-semibold">
        {article.title}{' '}
        <span className="inline-block transition-transform group-hover:translate-x-1 motion-reduce:transform-none">
          -&gt;
        </span>
      </h2>
      <p className="m-0 max-w-[30ch] text-sm opacity-50">
        {article.excerpt}
      </p>
      <Link href={`/articles/${article.id}`}>
        <span className="text-blue-500">Read more</span>
      </Link>
    </div>
  )
}

export default ArticleCard
Then comes the soul of our entire application: AISummary.tsx. I'll show the code first, and then we'll see exactly what is going on here:
'use client'

import { streamText } from "ai";
import { chromeai } from "chrome-ai";
import { SparklesIcon } from "@heroicons/react/16/solid";
import { FormEvent, useState } from "react";

interface AISummaryProps {
  articleContent: string;
}

const AISummary = ({ articleContent }: AISummaryProps) => {
  const [hashtags, setHashtags] = useState<string>("");
  const [loading, setLoading] = useState<boolean>(false);

  const handleSubmit = async (e: FormEvent<HTMLFormElement>) => {
    e.preventDefault();
    let accumulatedHashtags = "";
    setLoading(true);
    try {
      const { textStream } = await streamText({
        model: chromeai("text", {}),
        prompt: `This is the content of an article: ${articleContent} Summarize it in three basic hashtags with an emoji at the end of each hashtag! Please make sure you only write down the hashtags and emojis, nothing more!`,
      });
      for await (const textPart of textStream) {
        accumulatedHashtags += textPart;
        setHashtags(accumulatedHashtags);
      }
    } catch (e) {
      console.error(e);
    } finally {
      setLoading(false);
    }
  }

  return (
    <>
      <form onSubmit={handleSubmit}>
        <button
          type="submit"
          className="mt-10 flex items-center px-6 py-2 bg-blue-600 text-white rounded hover:bg-blue-700"
        >
          <SparklesIcon className="h-5 w-5 mr-2" />
          {loading ? "Generating hashtags..." : "Generate some hashtags!"}
        </button>
      </form>
      <p className="mt-4 text-gray-700 text-lg font-bold">
        {hashtags}
      </p>
    </>
  );
}

export default AISummary;
We have a form with a single button that triggers the AI hashtag generation when clicked; the resulting hashtags are displayed below it. Since there are interactive parts and we use the useState hook, this has to be a client component. That is also what lets us display the hashtag chunks Chrome AI generates for us as a nice live stream. Within the handleSubmit function, we use the streamText function from the Vercel AI SDK, which talks to our local model through the chrome-ai provider. All we need is a good prompt and a loop that iterates over the streamed text chunks and keeps filling our hashtags state. Pretty simple and easy to read, right?
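The accumulation loop is the key pattern here: each incoming chunk is appended to a running total, which is pushed into state so the UI re-renders as the text arrives. Stripped of React and the model, the same pattern can be sketched like this (the `fakeStream` generator below is my own stand-in for the SDK's `textStream`, not part of the SDK itself):

```typescript
// Stand-in for the SDK's textStream: any async iterable of text chunks.
// (fakeStream is an illustration only, not part of the Vercel AI SDK.)
async function* fakeStream(): AsyncGenerator<string> {
  for (const chunk of ["#ai 🤖 ", "#chrome 🌐 ", "#local 🏠"]) {
    yield chunk;
  }
}

// Same accumulation pattern as in handleSubmit: append each chunk to a
// running total and hand it to a callback (setHashtags in the component),
// so the caller sees progressively longer text after every chunk.
async function accumulate(
  stream: AsyncIterable<string>,
  onUpdate: (text: string) => void,
): Promise<string> {
  let accumulated = "";
  for await (const part of stream) {
    accumulated += part;
    onUpdate(accumulated);
  }
  return accumulated;
}
```

Calling `accumulate(fakeStream(), console.log)` would log three progressively longer strings, which is exactly the visual effect the component produces.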
However, a few dependencies need to be installed for this to work: the Vercel AI SDK, the chrome-ai provider, and Heroicons. Let's install them:
npm i ai
npm i chrome-ai
npm i @heroicons/react
Once we're done with that, we can create the content, specifically in .md format, since our application will process posts written in markdown. I created two examples, but the markdown of any article can be inserted here. Create a content folder in the project root, and inside it an article1.md and an article2.md. As I mentioned, we can fill them with any content we like; I used the following, with this formatting:
article1.md
---
title: "The Rise of Quantum Computing"
excerpt: "Quantum computing is set to revolutionize technology. This article explores its potential and challenges."
---
# The Rise of Quantum Computing
Quantum computing is one of the most exciting advancements in technology today. Unlike classical computers, which use bits to process information in binary form (0s and 1s), quantum computers use quantum bits or qubits. These qubits can exist in multiple states at once, thanks to the principles of quantum superposition and entanglement.
## What Makes Quantum Computing Different?
At the heart of quantum computing is the qubit. While a classical bit can be either 0 or 1, a qubit can be both at the same time. This property is known as superposition. Furthermore, qubits can be entangled, meaning the state of one qubit can depend on the state of another, no matter the distance between them. This entanglement allows quantum computers to perform complex calculations at unprecedented speeds.
### Potential Applications
The potential applications of quantum computing are vast and varied:
1. **Cryptography**: Quantum computers could crack current cryptographic codes with ease, leading to new methods of securing data.
2. **Medicine**: They can model molecular structures to accelerate drug discovery.
3. **Artificial Intelligence**: Quantum computing can significantly enhance machine learning algorithms, making AI more powerful and efficient.
4. **Optimization Problems**: From logistics to financial modeling, quantum computers can solve optimization problems more efficiently than classical computers.
## Challenges Ahead
Despite its promise, quantum computing faces significant challenges:
1. **Error Rates**: Quantum computers are highly susceptible to errors due to decoherence and quantum noise.
2. **Scalability**: Building and maintaining a large number of qubits in a coherent state is incredibly difficult.
3. **Cost**: Quantum computers are currently extremely expensive to build and operate.
## The Future of Quantum Computing
Researchers and tech giants like IBM, Google, and Microsoft are investing heavily in quantum computing. IBM has already made quantum computers available through the cloud, allowing researchers and developers to experiment with this technology.
### Conclusion
Quantum computing holds the promise of revolutionizing many fields. While there are substantial challenges to overcome, the potential benefits make it one of the most exciting areas of research and development today. As technology progresses, we can expect to see quantum computing move from theoretical research to practical applications, reshaping our world in the process.
article2.md
---
title: "The Future of Artificial Intelligence"
excerpt: "Artificial Intelligence (AI) continues to evolve rapidly. This article explores the future trends and impacts of AI technology."
---
# The Future of Artificial Intelligence
Artificial Intelligence (AI) has seen tremendous growth and evolution over the past decade. As AI technology continues to advance, its impact on various industries and aspects of daily life is becoming increasingly profound.
## Current State of AI
Today, AI is used in a wide range of applications, from natural language processing and computer vision to autonomous vehicles and healthcare diagnostics. AI algorithms can process vast amounts of data at incredible speeds, making them invaluable tools for data analysis and decision-making.
### Key Trends in AI
Several key trends are shaping the future of AI:
1. **Ethical AI**: As AI systems become more integrated into society, the importance of ethical AI practices is growing. This includes ensuring fairness, transparency, and accountability in AI algorithms.
2. **AI and Automation**: AI-driven automation is transforming industries by streamlining processes and increasing efficiency. From manufacturing to customer service, automation is reducing the need for human intervention in repetitive tasks.
3. **AI in Healthcare**: AI is revolutionizing healthcare by enabling early diagnosis of diseases, personalized treatment plans, and efficient management of healthcare resources.
4. **Edge AI**: With the rise of IoT devices, there is a growing trend towards processing AI algorithms on the edge, closer to where data is generated. This reduces latency and enhances real-time decision-making capabilities.
## Challenges and Considerations
While the future of AI is promising, several challenges need to be addressed:
1. **Data Privacy**: AI systems rely on vast amounts of data, raising concerns about data privacy and security. Ensuring that AI systems protect user data is crucial.
2. **Bias in AI**: AI algorithms can inadvertently perpetuate biases present in training data. Addressing bias in AI is essential to ensure fair and equitable outcomes.
3. **Job Displacement**: As AI-driven automation increases, there is a potential for job displacement in various sectors. Strategies for workforce reskilling and transition are needed.
## The Road Ahead
Looking ahead, AI technology is poised to continue its rapid evolution. Researchers are exploring new frontiers in AI, such as general artificial intelligence, which aims to create systems with human-like cognitive abilities. Additionally, collaborations between academia, industry, and governments will play a vital role in advancing AI technology responsibly.
### Conclusion
The future of artificial intelligence is both exciting and challenging. As AI technology advances, it holds the potential to revolutionize industries, improve quality of life, and address some of the world's most pressing issues. However, realizing this potential requires careful consideration of ethical, social, and economic implications.
By fostering responsible AI development and addressing key challenges, we can ensure that the future of AI benefits all of humanity.
Now we need a helper file that converts our markdown files into a structure that React can understand. To do this, create a lib folder and a markdown.ts file inside it:
import fs from 'fs'
import path from 'path'
import matter from 'gray-matter'
import { remark } from 'remark'
import html from 'remark-html'
import remarkGfm from 'remark-gfm'
import { Article } from '@/types'

const contentDirectory = path.join(process.cwd(), 'content')

export async function getArticleData(filename: string): Promise<Article> {
  const fullPath = path.join(contentDirectory, filename)
  const fileContents = fs.readFileSync(fullPath, 'utf8')
  const matterResult = matter(fileContents)
  const processedContent = await remark()
    .use(html)
    .use(remarkGfm)
    .process(matterResult.content)
  const contentHtml = processedContent.toString()

  return {
    id: filename.replace(/\.md$/, ''),
    title: matterResult.data.title,
    excerpt: matterResult.data.excerpt,
    contentHtml,
  }
}

export async function getAllArticles(): Promise<Article[]> {
  const filenames = fs.readdirSync(contentDirectory)
  return Promise.all(
    filenames.map((filename) => getArticleData(filename))
  )
}
This file is fairly self-explanatory: it does the conversion for us with the help of a few libraries, which we also need to install:
npm i gray-matter
npm i remark
npm i remark-gfm
npm i remark-html
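To get a feel for what gray-matter does with those files, here is a toy sketch of the idea: split the leading `---`-delimited frontmatter block off the markdown body and read its simple `key: "value"` lines. This is an illustration only (the real library handles full YAML, so use it in the app):

```typescript
// Toy frontmatter parser illustrating what gray-matter does for us:
// separate the ----delimited header from the markdown body and
// collect its simple `key: "value"` pairs into a data object.
function parseFrontmatter(
  raw: string
): { data: Record<string, string>; content: string } {
  const match = raw.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!match) return { data: {}, content: raw };

  const data: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx === -1) continue; // skip lines without a key
    const key = line.slice(0, idx).trim();
    // strip optional surrounding quotes from the value
    const value = line.slice(idx + 1).trim().replace(/^"|"$/g, "");
    data[key] = value;
  }
  return { data, content: match[2] };
}
```

Fed our article1.md, a parser like this would yield `data.title`, `data.excerpt`, and the remaining markdown as `content`, which is exactly the shape `matterResult` has in markdown.ts.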
If you are already in the terminal, then install the Tailwind typography add-on:
npm i @tailwindcss/typography
We need this so that the HTML we convert from markdown gets sensible typographic styling: Tailwind's base reset strips the browser defaults, and the typography plugin's prose classes restore nice formatting. We also need to register the plugin in our tailwind.config.ts file:
import type { Config } from "tailwindcss";

const config: Config = {
  content: [
    "./pages/**/*.{js,ts,jsx,tsx,mdx}",
    "./components/**/*.{js,ts,jsx,tsx,mdx}",
    "./app/**/*.{js,ts,jsx,tsx,mdx}",
  ],
  theme: {
    extend: {
      backgroundImage: {
        "gradient-radial": "radial-gradient(var(--tw-gradient-stops))",
        "gradient-conic":
          "conic-gradient(from 180deg at 50% 50%, var(--tw-gradient-stops))",
      },
    },
  },
  plugins: [
    require('@tailwindcss/typography'),
  ],
};

export default config;
In addition, we will also need a type definition file, types/index.ts:
export interface Article {
  id: string
  title: string
  excerpt: string
  contentHtml: string
}
And then all that's left is to put the building blocks together. Change app/page.tsx to:
import ArticleCard from '../components/ArticleCard'
import { getAllArticles } from '@/lib/markdown'
import { Article } from "@/types";

export default async function Home() {
  const articles: Article[] = await getAllArticles()

  return (
    <main className="flex min-h-screen flex-col items-center justify-between p-24">
      <div className="grid gap-4 lg:grid-cols-2 lg:w-full lg:max-w-5xl">
        {articles.map((article) => (
          <ArticleCard key={article.id} article={article} />
        ))}
      </div>
    </main>
  )
}
Here you can see that we iterate through the articles loaded from the markdown files and pass their data to the previously created ArticleCard component.
We also need a detail page that displays the entire article. The article card already links to it; we just need to create the route. Create the app/articles/[id]/page.tsx file:
import { getArticleData } from '@/lib/markdown';
import { notFound } from 'next/navigation';
import { Article } from '@/types';
import AISummary from "@/components/AISummary";

const ArticleDetail = async ({ params }: { params: { id: string } }) => {
  const article: Article = await getArticleData(`${params.id}.md`);

  if (!article) {
    return notFound();
  }

  return (
    <main className="flex min-h-screen flex-col items-center justify-center p-24">
      <div className="max-w-none w-full lg:max-w-3xl">
        <div className="prose prose-lg" dangerouslySetInnerHTML={{ __html: article.contentHtml }} />
        <AISummary articleContent={article.contentHtml} />
      </div>
    </main>
  );
}

export default ArticleDetail;
Here we use our helper function from markdown.ts to load the article and render it, and this is where the AISummary button comes into play.
And with that, our project is complete. Let's try it out by starting it with this command:
npm run dev
If we did everything right, the two articles appear in a list, and clicking one opens the page where we can read the article's text and generate the AI hashtags by pressing the button.
For the Quantum computing article, we received the following, for example:
And all this locally? Cool, right? :)
It is important to note that this model often cannot give detailed and accurate answers, and I have frequently seen it "get stuck" mid-generation, with the text simply stopping. These issues will surely improve as the model develops.
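Until the model improves, one lightweight safeguard on our side is to pull only the hashtag tokens out of whatever text comes back, ignoring any extra chatter the model adds despite our prompt. A minimal sketch (my own addition, not part of the tutorial code above):

```typescript
// Extract only the #hashtag tokens from the model's raw output,
// dropping any surrounding commentary or emojis.
function extractHashtags(raw: string): string[] {
  // \p{L} and \p{N} match Unicode letters and digits (needs the u flag)
  return raw.match(/#[\p{L}\p{N}_]+/gu) ?? [];
}
```

Dropping this into AISummary would mean rendering `extractHashtags(accumulatedHashtags).join(" ")` instead of the raw stream, at the cost of losing the per-hashtag emojis.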
I hope this little article was useful for you. You can find the source code on Github. Also, if you have any questions, feel free to contact me on X.
Happy coding! :)