Pangeanic Architecture Block Diagrams
The solution is implemented as a fully distributed architecture with separate modules and covers three main areas:
SaaS - Service Area
Data Management
Machine Learning
SaaS - Service Area
This area comprises the technical resources needed to serve users' requests. Its main blocks are:
Interfaces: a web interface and a RESTful API through which users, robots and other programs access the solution's functionality (see the API sketch after this list)
The HUB: receives and reroutes users' requests. Its physical implementation ranges from a single micro server to a Kubernetes-managed platform with load balancing and automatic scaling, serving up to thousands of requests per second
The File Processor: extracts text data from formatted documents and rebuilds them with minimal format loss
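As a rough illustration of programmatic access, the sketch below posts a text to a hypothetical translation endpoint over the RESTful API. The URL, route, field names and authentication scheme are assumptions for the example, not the documented interface.

```python
import requests

# Hypothetical endpoint and API key -- the real route and auth scheme
# are defined by the Pangeanic RESTful API documentation.
API_URL = "https://hub.example.com/api/v1/translate"
API_KEY = "your-api-key"

payload = {
    "src": "en",          # source language (assumed field name)
    "tgt": "es",          # target language (assumed field name)
    "text": ["The quick brown fox jumps over the lazy dog."],
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```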
Machine Learning Area
This area includes the neural networks and all the required resources to create, train and run them as text processors and NLP units:
NE Farm: a highly scalable farm of dockerized servers that scales out to serve processing requests (a minimal sketch follows this list)
Neural Trainer: dedicated GPU servers offered to customers to adapt the language models to their specific needs
A Model & Dockers Repo to store base and evolved models
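As a minimal sketch of how a dockerized engine could be pulled from a model repository and started on a farm node, the snippet below uses the Docker SDK for Python. The registry, image name, container name and port mapping are illustrative assumptions, not the actual repository contents.

```python
import docker

# Illustrative image name -- the real images live in the Pangeanic
# Model & Dockers Repo and their identifiers are not public.
IMAGE = "registry.example.com/pangeanic/ne-engine:en-es-base"

client = docker.from_env()

# Pull the engine image from the (assumed) private registry...
client.images.pull(IMAGE)

# ...and start it as a detached container, exposing an assumed HTTP port
# so the HUB can route processing requests to this node.
container = client.containers.run(
    IMAGE,
    detach=True,
    ports={"5000/tcp": 8500},
    name="ne-engine-en-es",
)
print(f"Engine container started: {container.short_id}")
```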
Data Management Area
Data is the fuel of machine learning. A single neural network may require up to 100 million examples for training, and those examples have to be acquired, processed, cleaned, packaged…
At Pangeanic we use a Data Lake as our Corpora Repository, and multiple NLP processes interface with it to add data units and to clean, categorize and evaluate them (see the cleaning sketch below)
PECAT is a dedicated tool that allows professionals and non-professionals (the crowd) to improve, filter, select and categorize the data
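To give a flavour of the kind of NLP processes that feed the Corpora Repository, here is a minimal, hypothetical cleaning pass over a parallel corpus: it drops empty segments, exact duplicates and pairs with implausible length ratios. It is a sketch of the general technique, not the actual Pangeanic pipeline.

```python
def clean_parallel_corpus(pairs, max_ratio=3.0):
    """Deduplicate (source, target) pairs and drop pairs whose
    length ratio suggests a misalignment. Illustrative only."""
    seen = set()
    cleaned = []
    for src, tgt in pairs:
        src, tgt = src.strip(), tgt.strip()
        if not src or not tgt:
            continue                      # drop empty segments
        key = (src, tgt)
        if key in seen:
            continue                      # drop exact duplicates
        ratio = max(len(src), len(tgt)) / max(min(len(src), len(tgt)), 1)
        if ratio > max_ratio:
            continue                      # drop suspicious length ratios
        seen.add(key)
        cleaned.append((src, tgt))
    return cleaned

sample = [
    ("Hello world", "Hola mundo"),
    ("Hello world", "Hola mundo"),        # duplicate, removed
    ("Yes", "Una frase demasiado larga para ser la traducción de 'Yes'"),
]
print(clean_parallel_corpus(sample))
```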
Flow
Users access the functionality through a variety of client interfaces (described later), such as the PGB, web applications and CAT tools, or programmatically through a RESTful API for integrations
The Production Access Server manages user requests and orchestrates the other modules (a simplified sketch follows this list). It requires a standard SQL database to store the data needed to fulfil the requests
The engines, either local (managed by the organization on its own premises or cloud) or operated by Pangeanic under a SaaS model, perform the actual language processing
A File Processor is in charge of converting files and documents when this feature is installed
An Online Trainer Module is in charge of evolving the models according to user preferences. It is integrated into the engine package when the online learning option is installed
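The sketch below mimics, in a very simplified form, the flow just described: an access server records each request in a SQL database and forwards the text to an engine endpoint. The table layout, engine URL and field names are assumptions for the example; SQLite stands in for whatever standard SQL database is deployed.

```python
import sqlite3
import requests

ENGINE_URL = "http://localhost:8500/translate"   # assumed engine endpoint

# Standard SQL storage for request bookkeeping (SQLite for the sketch).
db = sqlite3.connect("requests.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS requests "
    "(id INTEGER PRIMARY KEY, user TEXT, src TEXT, tgt TEXT, status TEXT)"
)

def handle_request(user, src_lang, tgt_lang, text):
    """Record the request, forward it to an engine, update its status."""
    cur = db.execute(
        "INSERT INTO requests (user, src, tgt, status) VALUES (?, ?, ?, ?)",
        (user, src_lang, tgt_lang, "pending"),
    )
    req_id = cur.lastrowid
    try:
        resp = requests.post(
            ENGINE_URL,
            json={"src": src_lang, "tgt": tgt_lang, "text": text},
            timeout=60,
        )
        resp.raise_for_status()
        db.execute("UPDATE requests SET status = ? WHERE id = ?", ("done", req_id))
        db.commit()
        return resp.json()
    except requests.RequestException:
        db.execute("UPDATE requests SET status = ? WHERE id = ?", ("failed", req_id))
        db.commit()
        raise
```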