diff --git a/Gemfile b/Gemfile
index d2cba4c..2974a52 100644
--- a/Gemfile
+++ b/Gemfile
@@ -6,5 +6,5 @@ gemspec
 
 group :test, :development do
   gem 'pry-byebug', '~> 3.10', '>= 3.10.1'
-  gem 'rubocop', '~> 1.58'
+  gem 'rubocop', '~> 1.65'
 end
diff --git a/Gemfile.lock b/Gemfile.lock
index ce967a4..bdb4cf4 100644
--- a/Gemfile.lock
+++ b/Gemfile.lock
@@ -1,8 +1,10 @@
 PATH
   remote: .
   specs:
-    ollama-ai (1.0.1)
-      faraday (~> 2.9)
+    ollama-ai (1.3.0)
+      faraday (~> 2.10)
+      faraday-typhoeus (~> 1.1)
+      typhoeus (~> 1.4, >= 1.4.1)
 
 GEM
   remote: https://rubygems.org/
@@ -10,17 +12,25 @@ GEM
     ast (2.4.2)
     byebug (11.1.3)
     coderay (1.1.3)
-    faraday (2.9.0)
+    ethon (0.16.0)
+      ffi (>= 1.15.0)
+    faraday (2.10.0)
       faraday-net_http (>= 2.0, < 3.2)
+      logger
     faraday-net_http (3.1.0)
       net-http
-    json (2.7.1)
+    faraday-typhoeus (1.1.0)
+      faraday (~> 2.0)
+      typhoeus (~> 1.4)
+    ffi (1.17.0)
+    json (2.7.2)
     language_server-protocol (3.17.0.3)
-    method_source (1.0.0)
+    logger (1.6.0)
+    method_source (1.1.0)
     net-http (0.4.1)
       uri
-    parallel (1.24.0)
-    parser (3.3.0.3)
+    parallel (1.25.1)
+    parser (3.3.4.0)
       ast (~> 2.4.1)
       racc
     pry (0.14.2)
@@ -29,34 +39,39 @@ GEM
     pry-byebug (3.10.1)
       byebug (~> 11.0)
       pry (>= 0.13, < 0.15)
-    racc (1.7.3)
+    racc (1.8.0)
     rainbow (3.1.1)
-    regexp_parser (2.9.0)
-    rexml (3.2.6)
-    rubocop (1.59.0)
+    regexp_parser (2.9.2)
+    rexml (3.3.2)
+      strscan
+    rubocop (1.65.0)
       json (~> 2.3)
       language_server-protocol (>= 3.17.0)
       parallel (~> 1.10)
-      parser (>= 3.2.2.4)
+      parser (>= 3.3.0.2)
       rainbow (>= 2.2.2, < 4.0)
-      regexp_parser (>= 1.8, < 3.0)
+      regexp_parser (>= 2.4, < 3.0)
       rexml (>= 3.2.5, < 4.0)
-      rubocop-ast (>= 1.30.0, < 2.0)
+      rubocop-ast (>= 1.31.1, < 2.0)
       ruby-progressbar (~> 1.7)
       unicode-display_width (>= 2.4.0, < 3.0)
-    rubocop-ast (1.30.0)
-      parser (>= 3.2.1.0)
+    rubocop-ast (1.31.3)
+      parser (>= 3.3.1.0)
     ruby-progressbar (1.13.0)
+    strscan (3.1.0)
+    typhoeus (1.4.1)
+      ethon (>= 0.9.0)
     unicode-display_width (2.5.0)
     uri (0.13.0)
 
 PLATFORMS
+  arm64-darwin-23
   x86_64-linux
 
 DEPENDENCIES
   ollama-ai!
   pry-byebug (~> 3.10, >= 3.10.1)
-  rubocop (~> 1.58)
+  rubocop (~> 1.65)
 
 BUNDLED WITH
    2.4.22
diff --git a/README.md b/README.md
index d78451c..72d1d7d 100644
--- a/README.md
+++ b/README.md
@@ -9,7 +9,7 @@ A Ruby gem for interacting with [Ollama](https://ollama.ai)'s API that allows yo
 ## TL;DR and Quick Start
 
 ```ruby
-gem 'ollama-ai', '~> 1.0.1'
+gem 'ollama-ai', '~> 1.3.0'
 ```
 
 ```ruby
@@ -62,40 +62,42 @@ Result:
 - [TL;DR and Quick Start](#tldr-and-quick-start)
 - [Index](#index)
 - [Setup](#setup)
-    - [Installing](#installing)
+  - [Installing](#installing)
 - [Usage](#usage)
-    - [Client](#client)
-    - [Methods](#methods)
-        - [generate: Generate a completion](#generate-generate-a-completion)
-            - [Without Streaming Events](#without-streaming-events)
-            - [Receiving Stream Events](#receiving-stream-events)
-        - [chat: Generate a chat completion](#chat-generate-a-chat-completion)
-            - [Back-and-Forth Conversations](#back-and-forth-conversations)
-        - [embeddings: Generate Embeddings](#embeddings-generate-embeddings)
-        - [Models](#models)
-            - [create: Create a Model](#create-create-a-model)
-            - [tags: List Local Models](#tags-list-local-models)
-            - [show: Show Model Information](#show-show-model-information)
-            - [copy: Copy a Model](#copy-copy-a-model)
-            - [delete: Delete a Model](#delete-delete-a-model)
-            - [pull: Pull a Model](#pull-pull-a-model)
-            - [push: Push a Model](#push-push-a-model)
-    - [Modes](#modes)
-        - [Text](#text)
-        - [Image](#image)
-    - [Streaming and Server-Sent Events (SSE)](#streaming-and-server-sent-events-sse)
-        - [Server-Sent Events (SSE) Hang](#server-sent-events-sse-hang)
-        - [New Functionalities and APIs](#new-functionalities-and-apis)
-    - [Request Options](#request-options)
-        - [Timeout](#timeout)
-    - [Error Handling](#error-handling)
-        - [Rescuing](#rescuing)
-            - [For Short](#for-short)
-        - [Errors](#errors)
+  - [Client](#client)
+    - [Bearer Authentication](#bearer-authentication)
+  - [Methods](#methods)
+    - [generate: Generate a completion](#generate-generate-a-completion)
+      - [Without Streaming Events](#without-streaming-events)
+      - [Receiving Stream Events](#receiving-stream-events)
+    - [chat: Generate a chat completion](#chat-generate-a-chat-completion)
+      - [Back-and-Forth Conversations](#back-and-forth-conversations)
+    - [embeddings: Generate Embeddings](#embeddings-generate-embeddings)
+    - [Models](#models)
+      - [create: Create a Model](#create-create-a-model)
+      - [tags: List Local Models](#tags-list-local-models)
+      - [show: Show Model Information](#show-show-model-information)
+      - [copy: Copy a Model](#copy-copy-a-model)
+      - [delete: Delete a Model](#delete-delete-a-model)
+      - [pull: Pull a Model](#pull-pull-a-model)
+      - [push: Push a Model](#push-push-a-model)
+  - [Modes](#modes)
+    - [Text](#text)
+    - [Image](#image)
+  - [Streaming and Server-Sent Events (SSE)](#streaming-and-server-sent-events-sse)
+    - [Server-Sent Events (SSE) Hang](#server-sent-events-sse-hang)
+    - [New Functionalities and APIs](#new-functionalities-and-apis)
+  - [Request Options](#request-options)
+    - [Adapter](#adapter)
+    - [Timeout](#timeout)
+  - [Error Handling](#error-handling)
+    - [Rescuing](#rescuing)
+      - [For Short](#for-short)
+    - [Errors](#errors)
 - [Development](#development)
-    - [Purpose](#purpose)
-    - [Publish to RubyGems](#publish-to-rubygems)
-    - [Updating the README](#updating-the-readme)
+  - [Purpose](#purpose)
+  - [Publish to RubyGems](#publish-to-rubygems)
+  - [Updating the README](#updating-the-readme)
 - [Resources and References](#resources-and-references)
 - [Disclaimer](#disclaimer)
 
@@ -104,11 +106,11 @@ Result:
 ### Installing
 
 ```sh
-gem install ollama-ai -v 1.0.1
+gem install ollama-ai -v 1.3.0
 ```
 
 ```sh
-gem 'ollama-ai', '~> 1.0.1'
+gem 'ollama-ai', '~> 1.3.0'
 ```
 
 ## Usage
@@ -125,6 +127,34 @@ client = Ollama.new(
 )
 ```
 
+#### Bearer Authentication
+
+```ruby
+require 'ollama-ai'
+
+client = Ollama.new(
+  credentials: {
+    address: 'http://localhost:11434',
+    bearer_token: 'eyJhbG...Qssw5c'
+  },
+  options: { server_sent_events: true }
+)
+```
+
+Remember that hardcoding your credentials in code is unsafe. It's preferable to use environment variables:
+
+```ruby
+require 'ollama-ai'
+
+client = Ollama.new(
+  credentials: {
+    address: 'http://localhost:11434',
+    bearer_token: ENV['OLLAMA_BEARER_TOKEN']
+  },
+  options: { server_sent_events: true }
+)
+```
+
 ### Methods
 
 ```ruby
@@ -767,6 +797,21 @@ result = client.request(
 
 ### Request Options
 
+#### Adapter
+
+The gem uses [Faraday](https://github.com/lostisland/faraday) with the [Typhoeus](https://github.com/typhoeus/typhoeus) adapter by default.
+
+You can use a different adapter if you want:
+
+```ruby
+require 'faraday/net_http'
+
+client = Ollama.new(
+  credentials: { address: 'http://localhost:11434' },
+  options: { connection: { adapter: :net_http } }
+)
+```
+
 #### Timeout
 
 You can set the maximum number of seconds to wait for the request to complete with the `timeout` option:
@@ -855,6 +900,7 @@ bundle
 rubocop -A
 
 bundle exec ruby spec/tasks/run-client.rb
+bundle exec ruby spec/tasks/test-encoding.rb
 ```
 
 ### Purpose
@@ -868,7 +914,7 @@ gem build ollama-ai.gemspec
 
 gem signin
 
-gem push ollama-ai-1.0.1.gem
+gem push ollama-ai-1.3.0.gem
 ```
 
 ### Updating the README
diff --git a/components/errors.rb b/components/errors.rb
index 70d8773..a791489 100644
--- a/components/errors.rb
+++ b/components/errors.rb
@@ -4,7 +4,7 @@ module Ollama
   module Errors
     class OllamaError < StandardError
       def initialize(message = nil)
-        super(message)
+        super
       end
     end
 
diff --git a/controllers/client.rb b/controllers/client.rb
index 842babb..496142e 100644
--- a/controllers/client.rb
+++ b/controllers/client.rb
@@ -1,6 +1,7 @@
 # frozen_string_literal: true
 
 require 'faraday'
+require 'faraday/typhoeus'
 require 'json'
 
 require_relative '../components/errors'
@@ -12,6 +13,8 @@ class Client
 
       ALLOWED_REQUEST_OPTIONS = %i[timeout open_timeout read_timeout write_timeout].freeze
 
+      DEFAULT_FARADAY_ADAPTER = :typhoeus
+
       def initialize(config)
         @server_sent_events = config.dig(:options, :server_sent_events)
 
@@ -21,6 +24,8 @@ def initialize(config)
           "#{config[:credentials][:address].to_s.sub(%r{/$}, '')}/"
         end
 
+        @bearer_token = config[:credentials][:bearer_token]
+
         @request_options = config.dig(:options, :connection, :request)
 
         @request_options = if @request_options.is_a?(Hash)
@@ -30,6 +35,8 @@ def initialize(config)
                            else
                              {}
                            end
+
+        @faraday_adapter = config.dig(:options, :connection, :adapter) || DEFAULT_FARADAY_ADAPTER
       end
 
       def generate(payload, server_sent_events: nil, &callback)
@@ -87,10 +94,12 @@ def request(path, payload = nil, server_sent_events: nil, request_method: 'POST'
 
        method_to_call = request_method.to_s.strip.downcase.to_sym
 
-        partial_json = ''
+        partial_json = String.new.force_encoding('UTF-8')
 
         response = Faraday.new(request: @request_options) do |faraday|
+          faraday.adapter @faraday_adapter
           faraday.response :raise_error
+          faraday.request :authorization, 'Bearer', @bearer_token if @bearer_token
         end.send(method_to_call) do |request|
           request.url url
           request.headers['Content-Type'] = 'application/json'
@@ -104,7 +113,13 @@ def request(path, payload = nil, server_sent_events: nil, request_method: 'POST'
 
             raise_error.on_complete(env.merge(body: chunk))
           end
 
-          partial_json += chunk
+          utf8_chunk = chunk.force_encoding('UTF-8')
+
+          partial_json += if utf8_chunk.valid_encoding?
+                            utf8_chunk
+                          else
+                            utf8_chunk.encode('UTF-8', invalid: :replace, undef: :replace)
+                          end
 
           parsed_json = safe_parse_json(partial_json)
@@ -115,7 +130,7 @@ def request(path, payload = nil, server_sent_events: nil, request_method: 'POST'
 
             results << result
 
-            partial_json = ''
+            partial_json = String.new.force_encoding('UTF-8')
           end
         end
       end
diff --git a/ollama-ai.gemspec b/ollama-ai.gemspec
index 94d9862..082546f 100644
--- a/ollama-ai.gemspec
+++ b/ollama-ai.gemspec
@@ -29,7 +29,9 @@ Gem::Specification.new do |spec|
 
   spec.require_paths = ['ports/dsl']
 
-  spec.add_dependency 'faraday', '~> 2.9'
+  spec.add_dependency 'faraday', '~> 2.10'
+  spec.add_dependency 'faraday-typhoeus', '~> 1.1'
+  spec.add_dependency 'typhoeus', '~> 1.4', '>= 1.4.1'
 
   spec.metadata['rubygems_mfa_required'] = 'true'
 end
diff --git a/spec/tasks/run-client.rb b/spec/tasks/run-client.rb
index 8c5cb23..4f03a04 100644
--- a/spec/tasks/run-client.rb
+++ b/spec/tasks/run-client.rb
@@ -1,6 +1,6 @@
 # frozen_string_literal: true
 
-require 'ollama-ai'
+require_relative '../../ports/dsl/ollama-ai'
 
 begin
   client = Ollama.new(
diff --git a/spec/tasks/test-encoding.rb b/spec/tasks/test-encoding.rb
new file mode 100644
index 0000000..b47843c
--- /dev/null
+++ b/spec/tasks/test-encoding.rb
@@ -0,0 +1,10 @@
+# frozen_string_literal: true
+
+require_relative '../../ports/dsl/ollama-ai'
+
+client = Ollama.new(
+  credentials: { address: 'http://localhost:11434' },
+  options: { server_sent_events: true }
+)
+
+puts client.show({ name: 'yi:latest' })[0]['license']
diff --git a/static/gem.rb b/static/gem.rb
index 6dcbc3c..27c8696 100644
--- a/static/gem.rb
+++ b/static/gem.rb
@@ -3,7 +3,7 @@
 module Ollama
   GEM = {
     name: 'ollama-ai',
-    version: '1.0.1',
+    version: '1.3.0',
     author: 'gbaptista',
     summary: 'Interact with Ollama API to run open source AI models locally.',
     description: "A Ruby gem for interacting with Ollama's API that allows you to run open source AI LLMs (Large Language Models) locally.",
diff --git a/tasks/generate-readme.clj b/tasks/generate-readme.clj
index 46067af..4bd573b 100644
--- a/tasks/generate-readme.clj
+++ b/tasks/generate-readme.clj
@@ -23,7 +23,7 @@
                        (remove nil?))]
     (->> processed-lines
          (map (fn [{:keys [level title link]}]
-                (str (apply str (repeat (* 4 (- level 2)) " "))
+                (str (apply str (repeat (* 2 (- level 2)) " "))
                     "- ["
                     title
                     "](#"
diff --git a/template.md b/template.md
index 413e2bf..814a17a 100644
--- a/template.md
+++ b/template.md
@@ -9,7 +9,7 @@ A Ruby gem for interacting with [Ollama](https://ollama.ai)'s API that allows yo
 ## TL;DR and Quick Start
 
 ```ruby
-gem 'ollama-ai', '~> 1.0.1'
+gem 'ollama-ai', '~> 1.3.0'
 ```
 
 ```ruby
@@ -66,11 +66,11 @@ Result:
 ### Installing
 
 ```sh
-gem install ollama-ai -v 1.0.1
+gem install ollama-ai -v 1.3.0
 ```
 
 ```sh
-gem 'ollama-ai', '~> 1.0.1'
+gem 'ollama-ai', '~> 1.3.0'
 ```
 
 ## Usage
@@ -87,6 +87,34 @@ client = Ollama.new(
 )
 ```
 
+#### Bearer Authentication
+
+```ruby
+require 'ollama-ai'
+
+client = Ollama.new(
+  credentials: {
+    address: 'http://localhost:11434',
+    bearer_token: 'eyJhbG...Qssw5c'
+  },
+  options: { server_sent_events: true }
+)
+```
+
+Remember that hardcoding your credentials in code is unsafe. It's preferable to use environment variables:
+
+```ruby
+require 'ollama-ai'
+
+client = Ollama.new(
+  credentials: {
+    address: 'http://localhost:11434',
+    bearer_token: ENV['OLLAMA_BEARER_TOKEN']
+  },
+  options: { server_sent_events: true }
+)
+```
+
 ### Methods
 
 ```ruby
@@ -729,6 +757,21 @@ result = client.request(
 
 ### Request Options
 
+#### Adapter
+
+The gem uses [Faraday](https://github.com/lostisland/faraday) with the [Typhoeus](https://github.com/typhoeus/typhoeus) adapter by default.
+
+You can use a different adapter if you want:
+
+```ruby
+require 'faraday/net_http'
+
+client = Ollama.new(
+  credentials: { address: 'http://localhost:11434' },
+  options: { connection: { adapter: :net_http } }
+)
+```
+
 #### Timeout
 
 You can set the maximum number of seconds to wait for the request to complete with the `timeout` option:
@@ -817,6 +860,7 @@ bundle
 rubocop -A
 
 bundle exec ruby spec/tasks/run-client.rb
+bundle exec ruby spec/tasks/test-encoding.rb
 ```
 
 ### Purpose
@@ -830,7 +874,7 @@ gem build ollama-ai.gemspec
 
 gem signin
 
-gem push ollama-ai-1.0.1.gem
+gem push ollama-ai-1.3.0.gem
 ```
 
 ### Updating the README