Multi-Hop OpenVPN task
An automated OpenVPN API is required. The API has to receive a Debian server or a CentOS server as input and automate the entire process.
Here are the requirements for libraries:
Asynchronous, FastAPI, multiprocessing, OpenSSL
Here are the logical requirements for functions:
[Connection]
As stated above, the API has to receive a Debian server or a CentOS server as input from the creator and automate the entire process. This is the first function: ask for the server and connect to it.
If no server is provided, reach out to the VMs API or the Dedicated Server API to deploy a new VPS/dedicated server instance (add a placeholder in your code for future implementation).
If the server requires a private key for the SSH connection, the API must handle this case too.
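The connection step above (password auth, with optional private-key auth) could be sketched as follows. This assumes asyncssh as the async SSH library, which the spec does not mandate; the parameter names are illustrative.

```python
from typing import Any, Dict, Optional

def build_ssh_options(host: str, user: str,
                      password: Optional[str] = None,
                      key_path: Optional[str] = None) -> Dict[str, Any]:
    """Prefer key-based auth when a private key is supplied, else password."""
    opts: Dict[str, Any] = {"host": host, "username": user, "known_hosts": None}
    if key_path:
        opts["client_keys"] = [key_path]   # private-key authentication
    elif password:
        opts["password"] = password        # fall back to password auth
    else:
        raise ValueError("either a password or a private key is required")
    return opts

# The async connection would then look roughly like:
# async with asyncssh.connect(**build_ssh_options(...)) as conn:
#     result = await conn.run("cat /etc/os-release")
```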
[OpenVPN management]
[1] Once we have successfully connected, we call the create-OpenVPN-server function, which automatically creates the OpenVPN server configuration specified by the visitor; if no config is specified, apply a default one that works without any visitor arguments.
[2] With a new instance of OpenVPN up and running, create the functions for user management: create/add new user, delete user.
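The user-management functions typically wrap easy-rsa on the remote server. A minimal sketch, assuming an easy-rsa 3.x install under /etc/openvpn/easy-rsa (the path and commands are assumptions about the deployment, not part of the spec):

```python
EASYRSA_DIR = "/etc/openvpn/easy-rsa"  # assumed install location

def create_user_cmd(username: str) -> str:
    """Shell command to issue a client certificate for a new user."""
    return f"cd {EASYRSA_DIR} && ./easyrsa --batch build-client-full {username} nopass"

def delete_user_cmds(username: str) -> list:
    """Commands to revoke a user's certificate and refresh the CRL."""
    return [
        f"cd {EASYRSA_DIR} && ./easyrsa --batch revoke {username}",
        f"cd {EASYRSA_DIR} && ./easyrsa gen-crl",
    ]
```

These strings would be executed over the SSH session established earlier; the .ovpn profile for the new user is then assembled from the issued certificate and key.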
[Single & Multi-Hop Management]
[1] With the instance up and running, the instance either becomes part of the multi-hop setup or remains a single VPN server.
[2] When connecting to the multi-hop chain of VPN servers, the client MUST use a SINGLE config file that connects to the FIRST server in the chain; that server redirects the packets to the NEXT one, and so on until reaching the final server.
A multi-hop chain can contain an unlimited number of servers; we want the possibility to add as many hops as we like. We have done tests in the past with PentaVPN servers; here is a short explanation of such a PentaVPN setup:
[Customer] => OpenVPN single config file => (Connecting... to the first VPN server in chain) => Connected to Vienna_01_Server => (Forwarding... packets to the next server in chain:) => Moscow_08_Server => (Forwarding... packets to the next server in chain:) => Amsterdam_19_Server => (Forwarding... packets to the next server in chain:) => Beirut_923_Server => (Forwarding... packets to the next server in chain:) => Berlin_84_Server => [login to view URL] (EXIT)
(When forwarding packets to the next server in the chain, do NOT connect to the next server by means of OpenVPN client config files between each server; that adds additional OpenVPN tunnels on top of each tunnel, which slows the connection. What we want is forwarding of packets.
A single OpenVPN config file is to be used by the client to connect to the multi-hop chain of VPN servers; that single file contains the first server in the chain, and nothing else.)
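On an intermediate hop, the "forward, don't re-tunnel" requirement usually comes down to plain kernel routing and NAT. Below is a rough sketch that generates one possible iptables/policy-routing recipe; the interface names, table number, and exact rules are assumptions a real deployment would need to validate:

```python
from typing import List

def forwarding_rules(tun_if: str, out_if: str, next_hop_ip: str) -> List[str]:
    """Shell commands to route tunnel traffic onward to the next hop."""
    return [
        "sysctl -w net.ipv4.ip_forward=1",                        # enable routing
        f"iptables -A FORWARD -i {tun_if} -o {out_if} -j ACCEPT",  # permit forwarding
        f"iptables -t nat -A POSTROUTING -o {out_if} -j MASQUERADE",
        f"ip route replace default via {next_hop_ip} dev {out_if} table 100",
        f"ip rule add iif {tun_if} table 100",  # policy-route only tunnel traffic
    ]
```

Note that this keeps a single OpenVPN tunnel (client to first hop); everything downstream is ordinary packet forwarding, so no nested tunnels are created.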
[3] When adding a new OpenVPN instance to an existing chain, and the required position is neither FIRST nor LAST but ANYWHERE in between, take the required position (after server X, before server Z) and make the necessary config replacements so that the chain keeps working with minimal downtime.
[4] Take down single or multiple instances from the chain, or take down the whole chain -> maintenance mode.
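Requirements [3] and [4] can be modeled in memory as a simple ordered chain. This sketch only tracks positions and reports which hops need their forwarding config regenerated; pushing the regenerated config to those servers is left out, and the class/method names are illustrative:

```python
from typing import List

class HopChain:
    def __init__(self, servers: List[str]):
        self.servers = list(servers)

    def insert_between(self, new: str, after: str) -> List[str]:
        """Insert `new` right after `after`; return hops whose config must change."""
        i = self.servers.index(after)
        self.servers.insert(i + 1, new)
        # the predecessor now forwards to `new`, and `new` forwards onward
        return self.servers[i:i + 3]

    def remove(self, server: str) -> List[str]:
        """Take one hop down; its predecessor must forward to its old successor."""
        i = self.servers.index(server)
        self.servers.pop(i)
        return self.servers[max(i - 1, 0):i + 1]
```

Because only the immediate neighbours of the changed hop are touched, the rest of the chain keeps serving traffic while their configs are swapped.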
Now the OpenVPN instance is up and running. The instance can be configured either to remain a single hop or to become a multi-hop, and it becomes part of bi-directional business models. For now, we would like to implement the following business models (with SQL tables to keep track of them):
[SSVP] - Sariff Shared VPN Pool - If the instance is set as a Shared OpenVPN, it becomes part of the Sariff Shared VPN Pool (SSVP): shared VPN instances used by multiple customers via Sariff.
[SPVP] - Sariff Private VPN Pool - If it is set as a Private OpenVPN, the instance becomes part of the Sariff Private VPN Pool (SPVP): VPNs dedicated to a single customer, not shared with others, via Sariff.
^ Remember, all of the above can be either Single-Hop or Multi-Hop instances, both shared and private: shared when multiple customers share a single or multi-hop instance, and private when one customer uses a single or multi-hop instance exclusively.
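One possible SQLite layout for tracking SSVP/SPVP membership and single vs multi-hop topology. The spec explicitly leaves the schema to the implementer, so every table and column name here is an assumption:

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS instances (
    id        INTEGER PRIMARY KEY,
    hostname  TEXT NOT NULL UNIQUE,
    pool      TEXT NOT NULL CHECK (pool IN ('SSVP', 'SPVP')),        -- shared vs private
    topology  TEXT NOT NULL CHECK (topology IN ('single', 'multihop')),
    chain_id  INTEGER,   -- NULL for single-hop instances
    chain_pos INTEGER    -- position within the multi-hop chain
);
CREATE TABLE IF NOT EXISTS customers (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
-- SSVP instances can be linked to many customers, SPVP to exactly one
CREATE TABLE IF NOT EXISTS instance_customers (
    instance_id INTEGER NOT NULL REFERENCES instances(id),
    customer_id INTEGER NOT NULL REFERENCES customers(id),
    PRIMARY KEY (instance_id, customer_id)
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```

The many-to-many link table covers the shared case; enforcing the single-customer rule for SPVP would live in application logic or a trigger.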
[Task management system & notification]
Tasks are generated not by the API itself but by the Controller API, which is an API that reaches out to this API and sends it the data to initialize the task:
task_id: "uuidv5"
task_action: "create_account_on_multihop_chain" / "create_account_on_single_hop" / "create_account_on_shared_instance" etc
Almost every request is a task, every task has a status, and every status change must produce an update that the upstream Controller API can read.
We need a recycling mechanism in place to handle tasks and failed tasks.
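A minimal sketch of the task record and the recycling rule for failed tasks. The status names, retry limit, and the uuid5 seed are assumptions; the spec only fixes the field names task_id and task_action:

```python
import uuid

MAX_RETRIES = 3  # assumed recycling limit

class Task:
    def __init__(self, action: str):
        # uuidv5 as requested; the namespace/seed choice here is illustrative
        self.task_id = str(uuid.uuid5(uuid.NAMESPACE_URL, f"task/{action}/{uuid.uuid4()}"))
        self.task_action = action
        self.status = "pending"   # pending -> running -> done / failed
        self.retries = 0

    def fail(self) -> None:
        """Recycle: failed tasks return to pending until the limit is hit."""
        self.retries += 1
        self.status = "pending" if self.retries < MAX_RETRIES else "failed"
```

Each status transition would also be persisted so the Controller API can poll it.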
[Transit Guardian - Chain of Trust]
All API requests must present the chain of trust. That means that when creating a user, the API acts as a ROOT CA, generating certificates for the customers who ask the API to register them.
With the API as ROOT CA, a certificate is issued, and that certificate is required to perform all management operations: without the chain of trust there is no access to management, nor to identification. Registration happens without any of this, of course, but identification requires the chain of trust. Automatically generate the certificate for the visitor so that we can attach it to their account without them noticing or worrying about the technical requirements of attaching a certificate to authorize requests; this way, we protect both the customer and the API.
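Since the library requirements list OpenSSL, the ROOT CA and per-customer certificates could be produced with openssl commands along these lines. File names, key sizes, subjects, and validity periods are placeholders; the CA commands would run once at startup and the customer commands once per registration:

```python
from typing import List

def ca_init_cmds(ca_key: str = "ca.key", ca_crt: str = "ca.crt") -> List[List[str]]:
    """Create the API's self-signed ROOT CA."""
    return [
        ["openssl", "genrsa", "-out", ca_key, "4096"],
        ["openssl", "req", "-x509", "-new", "-key", ca_key, "-sha256",
         "-days", "3650", "-subj", "/CN=Sariff-API-Root-CA", "-out", ca_crt],
    ]

def customer_cert_cmds(customer: str) -> List[List[str]]:
    """Issue a client certificate signed by the ROOT CA for one customer."""
    key, csr, crt = f"{customer}.key", f"{customer}.csr", f"{customer}.crt"
    return [
        ["openssl", "genrsa", "-out", key, "2048"],
        ["openssl", "req", "-new", "-key", key, "-subj", f"/CN={customer}", "-out", csr],
        ["openssl", "x509", "-req", "-in", csr, "-CA", "ca.crt", "-CAkey", "ca.key",
         "-CAcreateserial", "-days", "825", "-out", crt],
    ]
```

The issued certificate would then be stored against the customer's account and required (e.g. as a client TLS certificate) on every management request.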
[SQLite database management]
A database with all its tables (which are up to you to create; we will let you dynamically and freely manage all this data at your will, without dictating how to do it). We just have one requirement for the output:
--------------------------------------------------------------------------------------------------------------------
from typing import Any, Dict, Optional
import time

from fastapi import FastAPI
from fastapi.exceptions import RequestValidationError
from fastapi.responses import JSONResponse
from pydantic import BaseModel
from starlette.exceptions import HTTPException as StarletteHTTPException

class APIResponse(BaseModel):
    success: bool
    data: Optional[Any] = None
    error: Optional[str] = None
    meta: Dict[str, Any] = {}

class APIResponseHandler:
    @staticmethod
    def success(data: Any = None, status_code: int = 200, **kwargs) -> Dict[str, Any]:
        end_time = time.time()
        start_time = kwargs.pop('start_time', end_time)
        duration = round(end_time - start_time, 3)
        return {
            "content": APIResponse(
                success=True,
                data=data,
                meta={
                    "http_code": status_code,
                    "timestamp": end_time,
                    "duration": f"{duration} seconds",
                    **kwargs
                }
            ).dict(),
            "status_code": status_code
        }

    @staticmethod
    def error(message: str, status_code: int = 400, **kwargs) -> Dict[str, Any]:
        end_time = time.time()
        start_time = kwargs.pop('start_time', end_time)
        duration = round(end_time - start_time, 3)
        return {
            "content": APIResponse(
                success=False,
                error=message,
                meta={
                    "http_code": status_code,
                    "timestamp": end_time,
                    "duration": f"{duration} seconds",
                    **kwargs
                }
            ).dict(),
            "status_code": status_code
        }

class StandardJSONResponse(JSONResponse):
    def __init__(self, content: Any, status_code: int = 200, **kwargs):
        super().__init__(content=content, status_code=status_code, **kwargs)

app = FastAPI()
app.router.default_response_class = StandardJSONResponse

@app.exception_handler(StarletteHTTPException)
async def http_exception_handler(request, exc):
    return StandardJSONResponse(**APIResponseHandler.error(message=str(exc.detail), status_code=exc.status_code))

@app.exception_handler(RequestValidationError)
async def validation_exception_handler(request, exc):
    return StandardJSONResponse(**APIResponseHandler.error(message=str(exc), status_code=422))

@app.exception_handler(Exception)
async def general_exception_handler(request, exc):
    return StandardJSONResponse(**APIResponseHandler.error(message="An unexpected error occurred", status_code=500))
-------------------------------------------------------------------------------------------------------------------------