shipa.yml

shipa.yaml is a special file located at the root of the application source code. The file may be named either shipa.yaml or shipa.yml.

The file describes certain aspects of the application being deployed, such as deployment hooks and deployment-time health checks.

Deployment Hooks

Shipa provides several deployment hook options, such as restart:before, restart:after, and build. Deployment hooks allow developers to run commands during different stages of the application deployment.

Below is an example of how to declare hooks in the shipa.yaml file:

hooks:
  restart:
    before:
      - python manage.py local_file
    after:
      - python manage.py clear_cache
  build:
    - python manage.py collectstatic --noinput
    - python manage.py compress

Currently, Shipa supports the following hooks:

  • restart:before: executes commands before the unit is restarted. Commands listed in this hook run once per unit. For instance, if there is an application with two units and the shipa.yaml file listed above, the command python manage.py local_file would run two times, once per unit.
  • restart:after: works like restart:before, but runs after the unit is restarted.
  • build: executes commands during deploy, when the image is being generated.

Healthcheck

Developers can declare a health check in the shipa.yaml file. This health check is called during the deployment process, and Shipa confirms it is passing before continuing with the deployment.

If Shipa fails to run the health check successfully, it aborts the deployment before switching the router to point to the new units, so the application is never left unresponsive. Developers can configure the maximum time to wait for the application to respond with the docker:healthcheck:max-time config.
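
As a rough sketch only, assuming the colon-separated docker:healthcheck:max-time key maps to nested YAML in a Shipa configuration file (the nesting and the 120-second value are illustrative assumptions, not confirmed by this page):

docker:
  healthcheck:
    max-time: 120  # assumed: maximum time in seconds to wait for the application to respond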

Developers can configure a health check in the YAML file using the following example:

healthcheck:
  path: /healthcheck
  scheme: http
  method: GET
  status: 200
  headers:
    Host: test.com
    X-Custom-Header: xxx
  match: .*OKAY.*
  allowed_failures: 0
  use_in_router: true
  router_body: content

  • healthcheck:path: defines which path to call in the application. This path is called for each unit. It is the only mandatory field; if not set, the health check is ignored.
  • healthcheck:scheme: defines which scheme to use. The default is http.
  • healthcheck:method: defines the method used to make the HTTP request. The default is GET.
  • healthcheck:status: is the expected response code for the request. The default is 200. The Kubernetes Provisioner ignores this field and always expects the status code to be between 200 and 400.
  • healthcheck:headers: defines optional additional header names that can be used for the request. Header names must be capitalized.
  • healthcheck:match: is a regular expression to be matched against the response body. If not set, the body won’t be read and only the status code is checked. This regular expression uses Go syntax and runs with . matching \n (the s flag).
  • healthcheck:allowed_failures: specifies the number of allowed failures before the healthcheck considers the application unhealthy. The default is 0.
  • healthcheck:use_in_router: defines whether this healthcheck path should also be registered in the router. Note: Ensure that the healthcheck is consistent to prevent the router from disabling units. The default is false. When an application has no explicit healthcheck, or use_in_router is false, a default healthcheck is configured.
  • healthcheck:router_body: is the body passed to the router when use_in_router is true.
  • healthcheck:timeout_seconds: is the timeout for each healthcheck call in seconds. The default is 60 seconds.
  • healthcheck:interval_seconds: is exclusive to Kubernetes Provisioner. It is the interval in seconds between each active healthcheck call if use_in_router is set to true. The default is 10 seconds.
  • healthcheck:force_restart: is exclusive to Kubernetes Provisioner. It determines whether the unit should be restarted after allowed_failures consecutive healthcheck failures. (This sets the liveness probe in the Pod.) See the sketch after this list for these fields in context.
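
The full healthcheck example earlier omits the timing and restart fields. Below is a minimal sketch combining them; the values are illustrative, and interval_seconds and force_restart only take effect on the Kubernetes Provisioner:

healthcheck:
  path: /healthcheck
  timeout_seconds: 10      # illustrative: fail a single call after 10 seconds
  allowed_failures: 3      # illustrative: tolerate 3 failures before the app is considered unhealthy
  use_in_router: true      # needed for interval_seconds to apply
  interval_seconds: 5      # Kubernetes Provisioner only: active check every 5 seconds
  force_restart: true      # Kubernetes Provisioner only: restart the unit after allowed_failures consecutive failures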

Kubernetes Configuration

If the application is running on a Kubernetes-provisioned framework, Developers can set specific configurations for Kubernetes. These configurations are ignored if the application is running on a Shipa provisioner.

Developers can configure which ports are exposed by each process of the application. Here is a complete example:

kubernetes:
  processes:
    web:
      ports:
       - name: web 
         protocol: TCP
         target_port: 5000
         port: 8080
       - name: socket-port
         protocol: TCP
         port: 4000
    worker:
      ports: []

Each exposed port can be configured for each process using the ports key:

  • kubernetes:processes::ports:name: is a descriptive name for the port. This field is optional.
  • kubernetes:processes::ports:protocol: defines the port protocol. The accepted values are TCP (default) and UDP.
  • kubernetes:processes::ports:target_port: is the port that the process is listening on. If omitted, the port value is used.
  • kubernetes:processes::ports:port: is the port that will be exposed on a Kubernetes service. If omitted, the target_port value is used.

If both port and target_port are omitted in a port config, the deployment fails.

Developers can set a process to expose no ports with an empty field, like worker above.
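
As a brief illustration of the defaulting rules above, the following sketch exposes a single port by setting only target_port, so port falls back to the same value (the api process name and port number are hypothetical):

kubernetes:
  processes:
    api:                    # hypothetical process name
      ports:
        - protocol: TCP
          target_port: 9000 # port is omitted, so the service exposes 9000 as well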

The configuration for multiple ports still has a couple of limitations:

  • Healthcheck is set to use the first configured port in each process
  • Only the first port of the web process (or the only process, if there is only one) is exposed in the router, but Developers can access the other ports from other applications in the same cluster using Kubernetes DNS records, like appname-processname.namespace.svc.cluster.local

Security

Shipa supports secret injection into the application using the HashiCorp Vault product.

The shipa.yaml file has a security section where Developers can define Vault annotations for secret injection.

security:
    vault:
        annotations:
            vault.hashicorp.com/agent-inject: "true"
            vault.hashicorp.com/role: "internal-app"
            vault.hashicorp.com/agent-inject-secret-config: "internal/data/database/config"
            vault.hashicorp.com/agent-inject-template-config: |
                {{- with secret "internal/data/database/config" -}}
                    postgresql://{{ .Data.data.username }}:{{ .Data.data.password }}@postgres:5432/wizard
                {{- end -}}
