Frameworks are logical definitions of controls and policies. Framework policies are applied to applications created and deployed using a specific framework.
Frameworks can enforce policies across RBAC, network policies, security scanning, and more.
Frameworks can be bound to different clusters or cloud nodes and enforce different policies based on their configuration.
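As a sketch, a framework definition could bundle these controls together. The field names below are purely illustrative and do not reflect Shipa's actual framework schema:

```yaml
# Hypothetical framework definition -- field names are illustrative,
# not Shipa's actual configuration format.
name: prod-framework
policies:
  rbac:
    allowedTeams: [platform, backend]   # which teams may deploy here
  network:
    ingressOnly: true                   # restrict traffic direction
  securityScan:
    enabled: true
    blockOnCritical: true               # fail deploys with critical CVEs
```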
A cluster is a named group of Kubernetes nodes that belong to a specific Kubernetes cluster. Shipa API has a scheduler algorithm that distributes applications intelligently across a cluster of nodes.
A Shipa node is a physical or virtual machine with Docker installed.
Shipa nodes can be hosted either on a cloud provider or on on-premises infrastructure.
A managed Shipa node is a node created and managed by Shipa.
Using Shipa's native cloud provider integration, Shipa manages the created nodes, performing self-healing, auto-scaling, and other maintenance tasks.
An unmanaged node is a node created manually and then registered with Shipa.
When using unmanaged nodes, Shipa cannot manage them as it does managed nodes. Management responsibility is transferred to the user who created and added the node to Shipa.
Within Shipa, an application consists of:
- The application source code
- An operating system dependencies list
- A language-level dependencies list
- Instructions on how to run the application
Within Shipa, applications have a name, a unique address, a platform, associated development teams, a repository, and a set of units.
For Shipa, a unit is a container.
A unit has everything an application needs to run: the fetched operating system and language-level dependencies, the application’s source code, the language runtime, and the application’s processes defined in the Procfile.
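For instance, a Procfile for a Python application might declare its processes like this (the process commands are illustrative):

```
web: gunicorn app:wsgi --bind 0.0.0.0:$PORT
worker: python worker.py
```

Each named entry becomes a process type the unit can run; `web` is conventionally the process that receives HTTP traffic.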
A platform is a well-defined pack with installed dependencies for a language or framework that a group of applications will need. A platform can also be a container template (Docker image).
Platforms are easily extendable and managed by Shipa. Every application runs on top of a platform.
Shipa provisioners are responsible for creating and scheduling units for applications and containers. Currently, Shipa supports two provisioners: its own internal provisioner for Linux nodes, and Kubernetes.
Provisioners are also responsible for tracking which nodes are available for unit creation, registering new nodes, and removing old ones.
Provisioners are associated with frameworks. Shipa uses frameworks to find out which provisioner is responsible for each application. A single Shipa installation can manage different frameworks with different provisioners at the same time.
Shipa's provisioner stores metadata about existing Linux nodes and the containers on each node, and tracks images as they are created on each node. To accomplish this, Shipa talks directly to the Docker API on each Linux node; the Docker API must therefore be allowed to receive connections from the Shipa API over HTTP or HTTPS.
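Assuming a node exposes the Docker API on the default unencrypted port 2375 (the host name below is hypothetical), the kind of queries the provisioner makes can be reproduced with the Docker Engine API:

```shell
# List running containers on a node (Docker Engine API).
curl -s http://node-1.example.com:2375/containers/json

# List images present on the node.
curl -s http://node-1.example.com:2375/images/json

# With TLS enabled, use HTTPS on port 2376 with client certificates instead:
curl -s --cert cert.pem --key key.pem --cacert ca.pem \
  https://node-1.example.com:2376/containers/json
```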
Shipa relies on its internal BusyBody service to monitor containers on each node and report back which containers are unavailable or have had their address changed by Docker restarting them. The Shipa provisioner is then responsible for rescheduling those containers on new nodes.
There is no need to register a cluster to use the Shipa provisioner. With the Docker API running, you can add new nodes to Shipa, and Shipa can use them through the Shipa frameworks.
When units are scheduled on nodes, those application containers receive high availability prioritization. Shipa creates each new container on the node with the fewest containers from the application. If there are multiple nodes with no containers from the application being scheduled, Shipa creates new containers on nodes with different metadata from those that already exist.
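The placement heuristic described above can be sketched as follows. This is a simplified illustration of the two rules (fewest containers first, then spread across differing node metadata), not Shipa's actual implementation:

```python
from collections import Counter


def pick_node(nodes, app_name):
    """Pick the node with the fewest containers belonging to app_name.

    Each node is a dict like:
      {"name": ..., "metadata": {...}, "containers": ["app1", "app1", "app2"]}

    Ties are broken by preferring metadata not already used by the app,
    spreading units across failure domains.
    """
    def app_count(node):
        return Counter(node["containers"])[app_name]

    # Rule 1: only nodes with the fewest containers from this app qualify.
    fewest = min(app_count(n) for n in nodes)
    candidates = [n for n in nodes if app_count(n) == fewest]

    # Metadata of nodes that already run this app.
    used_meta = {
        frozenset(n["metadata"].items())
        for n in nodes if app_count(n) > 0
    }

    # Rule 2: prefer a candidate whose metadata differs from nodes in use.
    for node in candidates:
        if frozenset(node["metadata"].items()) not in used_meta:
            return node
    return candidates[0]
```

For example, given two nodes in zone "a" already running the app and an empty node in zone "b", the empty node in the different zone wins.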
You can register a Kubernetes cluster in Shipa that points to the Kubernetes API server. The Shipa Kubernetes provisioner uses Kubernetes itself to manage its nodes and containers.
Scheduling is controlled exclusively by Kubernetes: for each application/process, Shipa creates a Deployment controller. Changes to the application, such as adding or removing units, are executed by updating the Deployment, with a rolling update configured, through the Kubernetes API. Node containers are created using DaemonSets.
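A Deployment of the kind described, with a rolling update strategy, looks roughly like this (the names and image are illustrative, not what Shipa generates):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-web                # illustrative app/process name
spec:
  replicas: 3                    # one replica per unit
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                # bring up one new pod at a time
      maxUnavailable: 0          # keep all existing pods serving
  selector:
    matchLabels:
      app: myapp-web
  template:
    metadata:
      labels:
        app: myapp-web
    spec:
      containers:
        - name: web
          image: registry.example.com/myapp:v1   # illustrative image
          ports:
            - containerPort: 8080
```

Adding or removing units then amounts to patching `spec.replicas` through the Kubernetes API.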
A Service controller is created for every Deployment, allowing for direct communication between services without the need to go through a Shipa router.
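A matching Service (again illustrative) exposes the Deployment's pods under a stable cluster-internal address:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-web
spec:
  selector:
    app: myapp-web        # matches the Deployment's pod labels
  ports:
    - port: 80            # Service port other workloads connect to
      targetPort: 8080    # container port inside the pods
```

Other workloads in the cluster can then reach the application at the Service's cluster DNS name (`myapp-web.<namespace>.svc.cluster.local`) without traversing a router.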
You can scale your Kubernetes cluster in the background as usual, and Shipa will automatically identify the newly added or removed nodes.