
Removing the OpenShift dependency #100

Closed
desdrury opened this issue Nov 1, 2017 · 14 comments
Labels
0-kubernetes Vanilla kubernetes support Lagoon3.0

Comments

@desdrury

desdrury commented Nov 1, 2017

Hi,

I've been looking through Lagoon to try to identify the proprietary OpenShift parts, i.e. the parts that stop it from running on a standard Kubernetes cluster. So far I have identified the use of the following OpenShift-specific resources:

  • Routes
  • BuildConfigs
  • Builds
  • ImageStreams
  • ImageStreamTags
  • DeploymentConfigs

None of these resources offers anything that cannot be replaced by a more open implementation, but I wanted to confirm: is this the full set of OpenShift-specific resources?

I'll be adding much more commentary to this issue over the coming days as I work through some ideas. Hopefully this can be a great discussion about possible solutions for giving Lagoon the widest possible audience of users :-)

@Schnitzel
Contributor

I guess we need to differentiate here:

  1. Lagoon needs a home, i.e. it needs to run in Docker containers and these need to be orchestrated somewhere. Currently this is possible on a local system via docker-compose or on a production system via OpenShift.
  2. Lagoon can deploy Docker images into a container orchestration system and configure that system. Currently this is OpenShift, but in the future we want to support multiple systems: Kubernetes, Docker Swarm, etc. To support that we need a pluggable system, which is discussed here: allow the ability of having plugable services. #98

I assume your issue refers to the first one, i.e. running Lagoon itself somewhere other than OpenShift?

Now comes the fun part: Lagoon deploys itself inside Lagoon, so if we fix No. 2 we will automatically also fix No. 1. Inception FTW :)

So I'm keen to work on No. 2 (allow Lagoon to also deploy into Kubernetes), and with that we will pretty much automatically be able to run Lagoon in Kubernetes. But I would not call that "remove OpenShift" but rather "allow Lagoon to also run in Kubernetes", as OpenShift is (at least for us at amazee.io) an integral part of Lagoon.

If you want to work on No. 1 before No. 2 that's also perfectly fine; I just honestly wouldn't know how :)

@desdrury
Author

desdrury commented Nov 1, 2017

Thanks for the comments Michael :-)

You are correct that I am keen to work on No. 1, allowing the home of Lagoon to be Kubernetes. However, I am still working on understanding all of the elements of Lagoon, so I would definitely seek your guidance on the best approach to removing the OpenShift dependencies from Lagoon.

I agree that "remove OpenShift" might sound a bit harsh, but if we do it, Lagoon can still run on OpenShift and also on any of the KaaS (Kubernetes as a Service) cloud offerings or the multitude of Kubernetes distributions, which would dramatically increase the size of the market for Lagoon and lower the barriers to entry.

@Schnitzel
Contributor

So the easiest way to remove Lagoon's hard dependency on OpenShift, and make OpenShift just one possible choice alongside Kubernetes, would be to find replacements for the parts you mentioned. The biggest question for me is the Builds part:
Builds in OpenShift are nothing more than special Jobs (in the end, again Pods) that run in privileged mode. Besides that, they provide some easier APIs for the specific task of building an image.
What is the best practice for building Docker images within Kubernetes? Are there plugins for it, or is it just a matter of creating a new k8s Job object with special permissions?

@desdrury
Author

desdrury commented Nov 2, 2017

There are actually many choices for the build parts. It really boils down to your favourite job management solution, e.g. Jenkins. Usually what I do is deploy Jenkins from this Helm Chart.

https://github.com/kubernetes/charts/tree/master/stable/jenkins

It is highly configurable and the community has created a comprehensive solution. Out of the box it is configured to spin up Jenkins agents as needed. However, the default Jenkins agent cannot do Docker in Docker (dind) to build Docker images, so I create my own version of the Jenkins agent with this Dockerfile.

FROM jenkinsci/jnlp-slave:2.62

USER root

ENV DOCKER_VERSION=1.12.6
ENV KUBE_VERSION=1.7.2
ENV HELM_VERSION=2.6.2

WORKDIR /tmp

# Docker
RUN curl -O https://get.docker.com/builds/Linux/x86_64/docker-${DOCKER_VERSION}.tgz && \
    tar xzvf docker-${DOCKER_VERSION}.tgz && \
    mv docker/* /usr/local/bin/ && \
    rm -R docker/ && \
    rm docker-${DOCKER_VERSION}.tgz

# Kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v${KUBE_VERSION}/bin/linux/amd64/kubectl && \
    chmod +x ./kubectl && \
    mv kubectl /usr/local/bin/

# Helm
RUN curl -O https://storage.googleapis.com/kubernetes-helm/helm-v${HELM_VERSION}-linux-amd64.tar.gz && \
    tar xzvf helm-v${HELM_VERSION}-linux-amd64.tar.gz && \
    mv linux-amd64/helm /usr/local/bin/ && \
    rm -R linux-amd64/ && \
    rm helm-v${HELM_VERSION}-linux-amd64.tar.gz

I then modify the Kubernetes cloud configuration for the agent to mount in /var/run/docker.sock so that the docker command can build images. It's basically the same mechanism that OpenShift uses.
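
For illustration, here's a minimal sketch of the kind of agent pod spec I mean, assuming an agent image built from the Dockerfile above (the pod name, labels and image name are placeholders):

# Hypothetical Jenkins agent pod that mounts the host Docker socket so the
# docker CLI inside the agent can build images against the node's daemon.
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-agent                              # placeholder name
  labels:
    jenkins: agent                                 # placeholder label
spec:
  containers:
    - name: jnlp
      image: my-registry/jnlp-slave-docker:latest  # hypothetical image built from the Dockerfile above
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock                 # host Docker socket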

I see that Jenkins is already part of Lagoon? If that is the case, it would be a really simple process to create some Jenkinsfile scripts that do the build and deploy instead of using the OpenShift resources.

@Schnitzel
Contributor

Actually, we just removed Jenkins from Lagoon (see the develop branch) because in the end we only needed it to do one single thing: run one Docker container with dind:

https://github.com/amazeeio/lagoon/blob/9b33aef5c82dcf907623dec3e035a98c95270c73/services/openshiftdeploy/src/index.js#L119-L151

Running Jenkins for that is way too crazy and overengineered, so we decided to ditch Jenkins and just use the OpenShift Builds instead.

Is there any best practice to run dind directly in Kubernetes?

@desdrury
Author

desdrury commented Nov 2, 2017

Regarding replacing the Routes resource, this could easily be achieved by any number of the Ingress Controllers that are available. I've used a few, but have come to rely on the standard Nginx Ingress Controller as it is so configurable, especially for security. Here's the Helm Chart.

https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress

I also use this Helm Chart to automate the provisioning and rotation of TLS certs from Let's Encrypt.

https://github.com/kubernetes/charts/tree/master/stable/kube-lego

And if you want to automate the setup of Route53 DNS entries based on Ingress resources you can use this Helm Chart.

https://github.com/kubernetes/charts/tree/master/stable/external-dns
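
To make the Routes replacement concrete, here's a rough sketch of what one of Lagoon's Routes could look like as an Ingress handled by the Nginx Ingress Controller, with the kube-lego annotation for Let's Encrypt TLS (the hostname, service name and port are placeholders):

apiVersion: extensions/v1beta1     # Ingress API group current at the time
kind: Ingress
metadata:
  name: example-project            # placeholder name
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"           # picked up by kube-lego to provision a Let's Encrypt cert
spec:
  tls:
    - hosts:
        - example-project.example.com        # placeholder hostname
      secretName: example-project-tls
  rules:
    - host: example-project.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: nginx             # placeholder Service fronting the application pods
              servicePort: 8080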

@desdrury
Author

desdrury commented Nov 2, 2017

For dind without a job manager you could do as you suggested and just use a Job resource. The container would have tools similar to what I defined earlier in the Dockerfile, plus a script to do the build based upon a passed-in argument.
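
A rough sketch of what such a Job could look like, assuming a hypothetical builder image containing the tools from the Dockerfile above plus a build script, and reusing the host Docker socket in the same way as the Jenkins agent:

apiVersion: batch/v1
kind: Job
metadata:
  name: build-example-project                      # placeholder name
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: build
          image: my-registry/builder:latest        # hypothetical image with docker, kubectl and helm installed
          command: ["/usr/local/bin/build.sh"]     # hypothetical build script baked into the image
          args: ["example-project"]                # the "passed in argument"
          volumeMounts:
            - name: docker-sock
              mountPath: /var/run/docker.sock      # reuse the host Docker daemon for image builds
      volumes:
        - name: docker-sock
          hostPath:
            path: /var/run/docker.sock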

@desdrury
Author

desdrury commented Nov 2, 2017

You could also use a CronJob for automated builds each night, if that made sense in the solution.
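
For example, the same build could be wrapped in a CronJob along these lines (the schedule and names are just illustrative):

apiVersion: batch/v1beta1                          # CronJob API version in Kubernetes 1.8-era clusters
kind: CronJob
metadata:
  name: nightly-build-example-project              # placeholder name
spec:
  schedule: "0 2 * * *"                            # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: build
              image: my-registry/builder:latest    # same hypothetical builder image as above
              command: ["/usr/local/bin/build.sh", "example-project"]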

@desdrury
Author

desdrury commented Nov 2, 2017

You may also find this interesting; it was released by Microsoft last week, by the same people who wrote Helm and Draft.

http://brigade.sh

@thom8
Contributor

thom8 commented Nov 2, 2017

I wonder if we could leverage OpenFaaS as the build component, since it includes a simple API to trigger builds.

I recently converted a bunch of Jenkins tasks to serverless functions and no longer need to maintain Jenkins or a server.

@thom8
Contributor

thom8 commented Dec 30, 2017

http://kompose.io/ is another option, which currently supports both Kubernetes & OpenShift.

This can also be extended to support other formats -- https://github.com/kubernetes/kompose/blob/master/docs/architecture.md#transformer

@sylus

sylus commented Apr 19, 2018

I'd love to help with this as the blocker for me using Lagoon is the OpenShift dependency.

Would be neat to get this working with ACS Engine.

@Schnitzel changed the title from "Removing the OpenShift dependancy" to "Removing the OpenShift dependency" on Oct 19, 2018
@Schnitzel
Contributor

We got another client asking specifically for this. The idea is to use AWS EKS (managed Kubernetes by AWS) with RDS and ElastiCache, so we don't need to worry about those for now.
The customer also brings their own Docker image registry, so that is handled as well.
Therefore, what is left:

  • create kubernetesbuilddeploy, which runs Kubernetes Jobs instead of Builds
  • create kubernetesbuilddeploymonitor, which monitors the Kubernetes Job instead of the Build
  • create a kubernetes-build-deploy-dind which uses kubectl instead of oc
    • Add templates for nginx/php, varnish, cli with Deployments instead of DeploymentConfigs (see the sketch after this list)
    • Replace Routes with Ingress
    • Find a way to render the templates, as kubectl does not support any templating system (maybe use Helm here already?)
  • replace the use of oc with kubectl within the ssh service (it should actually be possible to use that for OpenShift as well, as we only need kubectl commands anyway)
  • teach the Lagoon API that there is now also a Kubernetes type and not only an OpenShift type
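
As a reference for the template work, here's a rough sketch of a Deployment replacing one of the DeploymentConfigs, e.g. for the nginx/php pod. Names, labels and images are placeholders; the real templates would live in the kubernetes-build-deploy-dind image and could be rendered via Helm as mentioned above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-php                  # placeholder name for the nginx/php service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-php
  template:
    metadata:
      labels:
        app: nginx-php
    spec:
      containers:
        - name: nginx
          image: registry.example.com/project/nginx:latest   # placeholder image from the customer's registry
          ports:
            - containerPort: 8080
        - name: php
          image: registry.example.com/project/php:latest     # placeholder image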

@tobybellwood
Member

With the release of Lagoon2 imminent, closing this
