Python Script Alteration & OpenAI API
$30-250 AUD
To be paid on delivery
I am looking for a Python developer to make alterations to my existing script and integrate it with the OpenAI ChatGPT API. The specific requirements for the project are as follows:
- Alteration of the input/output format in the Python script
- Integration of new functionalities as per the provided requirements and examples
- Optimization of the existing code for improved performance
Ideal skills and experience for this job include:
- Proficiency in Python programming
- Experience with OpenAI APIs, specifically ChatGPT
- Strong understanding of input/output formatting in Python scripts
- Ability to optimize code for improved performance
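For the ChatGPT piece specifically, here is a minimal, hedged sketch of how the integration might be wired in. It assumes the official openai Python package (v1.x) and an OPENAI_API_KEY environment variable; the model name, prompt, and the summarise_scrape_result helper are placeholders for illustration, not part of the existing script.

# Hypothetical sketch of the requested ChatGPT integration (not part of the current script).
# Assumes the official openai package (>= 1.0) and an OPENAI_API_KEY environment variable.
import os
from openai import OpenAI

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def summarise_scrape_result(scrape_result: dict) -> str:
    """Send the scrape result to ChatGPT and return its text reply."""
    prompt = (
        "Summarise the following keyword and content data as a content brief:\n"
        f"{scrape_result}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content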
Here is the script from [login to view URL] - [login to view URL]
I want to write the results from this into a text file instead of sending them to the front end (a minimal sketch of that change follows the script below).
from serpapi import GoogleSearch
import os
import requests
# import random
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from dotenv import load_dotenv
from concurrent.futures import ThreadPoolExecutor
import asyncio
import newspaper
from bs4 import BeautifulSoup
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.util import bigrams, trigrams
import nltk
from .[login to view URL] import add_post, save_links, get_post, get_all_post, remove_post, update_post

# Load env
load_dotenv()
SERPAPI_KEY = os.getenv('c0797abfd638908bf4cadfb3c2522a2865d2d39f33d0d627411dc858332f6329')
app = FastAPI()

# Setup CORS
origins = [
    "http://localhost",
    "http://localhost:3000",
]
app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    # allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

nltk.download('punkt')
nltk.download('stopwords')
# API Routes
@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.post('/search/')
def search(search: dict):
    # 1. Get Google search API
    googleSearch = GoogleSearch({
        "api_key": SERPAPI_KEY,
        "engine": "google",
        "num": 20,
        "q": search['keyword'],            # e.g. 'botox melbourne'
        "hl": search['searchLang'],        # e.g. 'english'
        "gl": search['searchLocation'],    # e.g. 'melbourne'
        "google_domain": search['googleDomain'],
        # "location": location,
    })
    result = googleSearch.get_dict()

    # 2. Get Google autocomplete API
    autocompleteSearch = GoogleSearch({
        "api_key": SERPAPI_KEY,
        "engine": "google_autocomplete",
        "q": search['keyword']
    })
    autocompleteResult = autocompleteSearch.get_dict()
    result['autocomplete'] = autocompleteResult['suggestions']

    post = add_post(
        title = search['keyword'],
        search_query = {
            "q": search['keyword'],
            "hl": search['searchLang'],
            "gl": search['searchLocation'],
            "googleDomain": search['googleDomain'],
        },
        search_result = result
    )
    return {
        "post_id": post.id,
        "search_result": result
    }
@app.post('/links/')
def saveLinks(linkData: dict):
    post = save_links(
        post_id = linkData['postId'],
        choosen_links = linkData['choosenLinks']
    )
    return post

@app.get('/posts')
def getPosts():
    posts = get_all_post()
    return posts

@app.get('/posts/{post_id}')
def getPost(post_id: str):
    post = get_post(post_id)
    return post

@app.put('/posts/{post_id}')
def updatePost(post_id: str, data: dict):
    post = update_post(post_id, data)
    return post

@app.delete('/posts/{post_id}')
def deletePost(post_id: str):
    post = remove_post(post_id)
    return post
@app.get('/scrape/{post_id}')
async def scrape(post_id: str):
    post = get_post(post_id)

    # Return early if the result already exists
    if post.links_scrape_result:
        return {
            "status": "success",
            "data": post.links_scrape_result
        }

    # Scrape links if new
    links = post.choosen_links
    lang = post.search_query['hl']
    contentInfo = []
    allKeywords = []
    with ThreadPoolExecutor() as executor:
        tasks = [asyncio.get_running_loop().run_in_executor(executor, _scrape_article, link, lang) for link in links]
        results = await asyncio.gather(*tasks)
        for result in results:
            if result:
                contentInfo.append(result)
                allKeywords.extend(result['keywords'])

    topKeywords = _getTopKeywords(allKeywords)
    averageWords = sum([content['totalWords'] for content in contentInfo]) / len(contentInfo)
    links_scrape_result = {
        "contentInfo": contentInfo,
        "topKeywords": topKeywords,
        "averageWords": averageWords
    }
    update_post(post_id, {
        "links_scrape_result": links_scrape_result
    })
    return {
        "status": "success",
        "data": links_scrape_result
    }
# Currently lang is taken from the hl query parameter.
# There is no guarantee it matches a language available in nltk.
# [login to view URL]

# Newspaper config (when using a proxy)
# config = newspaper.Config()
# config.proxies = {
#     'http': '[login to view URL]'  # sample only
# }
# config.request_timeout = 20
def _scrape_article(link, lang):
    # Add config=config to the parameters if adding a proxy/user agents
    # article = newspaper.Article(link, keep_article_html=True, config=config)
    article = newspaper.Article(link, keep_article_html=True)
    content = None
    try:
        article.download()
        article.parse()
        content = article.text
        content_html = article.article_html
        content_title = article.title

        # Current logic to detect whether scraping is blocked:
        # check if the extracted text is very short
        if len(content) < 100:
            print("Scraping might be blocked! In case the exception block does not catch it")
            print('---- link ---- \n' + link)
            # Scrape from the Golang endpoint instead
            endpoint = "http://api-golang:8080/scrape?link=" + link
            response = requests.get(endpoint)
            if response.status_code == 200:
                data = response.json()
                content = data['content']
                content_html = data['content_html']
            else:
                print(f"Error {response.status_code}: {response.text}")

        soup = BeautifulSoup(content_html, 'html.parser')
        headings = []
        for heading in soup.find_all(['h1', 'h2', 'h3']):
            headings.append({
                "text": heading.text,
                "tag": heading.name
            })
    except Exception:
        print("-----------")
        print("Error scraping. Todo: run manual fetch ", link)
        print("-----------")
        return None

    if content is None:
        return None

    _lang = 'english'
    if lang == 'id':
        _lang = 'english'
    # later add more languages, mapping hl query -> nltk language
    # [login to view URL]

    text = content.lower()
    word_tokens = word_tokenize(text)
    stop_words = set(stopwords.words(_lang))
    filtered_text = [word for word in word_tokens if word.isalnum() and word not in stop_words]
    single_freq_dist = nltk.FreqDist(filtered_text)
    bigram_freq_dist = nltk.FreqDist(bigrams(filtered_text))
    trigram_freq_dist = nltk.FreqDist(trigrams(filtered_text))
    # combine all frequency distributions
    freq_dist = single_freq_dist + bigram_freq_dist + trigram_freq_dist
    keywords = []
    for word, frequency in freq_dist.most_common(10):
        keywords.append({
            "word": word,
            "frequency": frequency
        })
    totalWords = len(content.split())
    return {
        "link": link,
        "totalWords": totalWords,
        "keywords": keywords,
        "title": content_title,
        "headings": headings,
    }
def _getTopKeywords(allKeywords):
    MAX_KEYWORD = 15
    topKeywords = {}
    for word_dict in allKeywords:
        # if word_dict['word'] is a tuple, convert it to a string
        _word = word_dict['word']
        if isinstance(_word, tuple):
            _word = ' '.join(word_dict['word'])
        word = _word.lower()
        if word in topKeywords:
            topKeywords[word] += word_dict['frequency']
        else:
            topKeywords[word] = word_dict['frequency']
    # sort the dictionary by frequency in descending order and keep the top MAX_KEYWORD items
    sorted_topKeywords = sorted(topKeywords.items(), key=lambda item: item[1], reverse=True)[:MAX_KEYWORD]
    return sorted_topKeywords
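Regarding the request above to put the results into a text file rather than returning them to the front end, here is a minimal, hedged sketch. It writes a result dict as JSON text; the save_result_to_file helper and the file names are placeholders, not part of the existing script.

# Hypothetical sketch: persist a result dict to a text file instead of
# returning it to the front end. The helper name and file path are assumptions.
import json
from pathlib import Path

def save_result_to_file(result: dict, path: str = "search_result.txt") -> None:
    """Write the result as pretty-printed JSON text so another script can read it back."""
    Path(path).write_text(json.dumps(result, indent=2, ensure_ascii=False), encoding="utf-8")

# Inside the /search/ or /scrape/ route, this could replace returning the data:
# save_result_to_file(links_scrape_result, "scrape_result.txt")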
______
We need to use the [login to view URL] app on GitHub, utilise the results of two aspects, and store them in a text file.
We then need to be able to use those results in another Python script.
Can the team help? Thanks.
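For that last point, here is a hedged sketch of how another Python script might load the saved results, assuming they were written as JSON text as in the sketch above (the file names are placeholders):

# Hypothetical consumer script: load the previously saved results back into Python.
import json
from pathlib import Path

search_result = json.loads(Path("search_result.txt").read_text(encoding="utf-8"))
scrape_result = json.loads(Path("scrape_result.txt").read_text(encoding="utf-8"))

# e.g. reuse the top keywords downstream
for word, frequency in scrape_result.get("topKeywords", []):
    print(word, frequency)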
Project ID: #37631624
About the project
38 freelancers are bidding on average $168 for this job
Hello, greetings. After going through your project description, I feel confident and excited to work on this project for you. But I have some crucial things and queries to clear up. Can you please leave a message on ...
Hello. I have read your requirement and I can do that. Please come to chat and we will discuss this in more detail. I will be waiting for your reply.
Hi there, I'm thrilled to apply for your Python Script Alteration & OpenAI API project. With 4-5 years of experience in GitHub, Python and SQL, I'm confident in my ability to bring valuable insights and expertise to ...
Hello sir, I hope you are well. I have read your job description; it is a doable job given my experience and knowledge. I want to ask you a few questions about the job description. I am a full stack developer with good experi...
Hi, I can help you make alterations to your existing script and integrate it with the OpenAI ChatGPT API. Message me for further discussion.
As a Python developer well-versed in SQL, I'm ready to level up your Python script to meet your requirements in integrating with the OpenAI ChatGPT API and altering the input/output format. My expertise in Python, Pand...
===== AVAILABLE FOR IMMEDIATE WORK ======= Hi! I can make alterations to your existing script and integrate it with the OpenAI ChatGPT API. I'm working (part-time) as a full-stack developer for E-Vision Software Ltd, ...
Hello there Vihang S., good morning! I have carefully checked your requirements and am really interested in this job. I'm a full stack Node.js developer working on large-scale apps as a lead developer with U.S. and European ...
Hello Vihang S., I have carefully checked your requirements and am really interested in this job. I can complete your project on time and you will experience great satisfaction with me. I have rich experience in SQL, Pyt...
Hello Vihang S., I went through your project description and it seems I am a great fit for this job. As an expert with many years of experience in Python, SQL, and GitHub, please come over to chat and discuss your requ...