VMware Code Stream: Builds as a Micro-Service

I’ve been exploring the concept of treating software builds as micro-services. Here’s why:

  1. Portability: Just do a docker run and you’ve got a clean environment to build your software!
  2. Scale: The classic master-and-agent CI infrastructure has its limits, as the master must track the state of all the agents and pick the right agents to build on. It’s really just another layer to manage.
  3. Versioning: Build components are defined in Dockerfiles versioned in git. If I need to build on version X of Java, I just pull it into my build with docker run, as opposed to installing it on a system and sorting out which system has the appropriate version at build time. Try doing that simply with a classic CI server.
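As a sketch of what the portability point looks like in practice. The image tag, paths, and the pom.xml guard are my own illustrative choices, not part of the original setup:

```shell
#!/bin/sh
# Sketch: a throwaway, clean build environment via docker run.
# The guard lets the script degrade gracefully when Docker or a project is absent.
clean_build() {
  if command -v docker >/dev/null 2>&1 && [ -f pom.xml ]; then
    # Mount the source into the container, build, then discard the container (--rm).
    docker run --rm -v "$(pwd)":/workspace -w /workspace \
      maven:3-jdk-8 mvn -B package
  else
    echo "skipping: docker or pom.xml not available here"
  fi
}
clean_build
```

Every build starts from the same pristine image, so there is no drift between build machines.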

I’m going to show that you can deliver software with CI methodologies and a pipeline while excluding a traditional CI server. I have authored and contributed to plugins for Jenkins, so you may find it odd that I’m challenging the position of a classic CI server. However, there are simply opportunities to do things differently and perhaps better.

In this case I’ll show you how to build in a micro-service fashion using VMware’s Code Stream, Photon OS and Log Insight!

No Humans allowed!

The key to a consistent build is keeping people out of the gears. Traditionally, log analysis has meant people logging into servers, introducing a risk of drift. At best, the logs were dumped onto something like Jenkins and developers would download them to their desktops, sifting through them with Notepad.

First things first: we must address the build logs!

Log Insight is a great tool for centralizing logs and viewing their contents. We’ll need to turn the Log Insight agent into a container. Take a peek at my Log Insight agent on Docker Hub. It’s based on the Photon OS image on Docker Hub.

Don’t Blink!  Well…Go ahead and blink it’s OK.

The next challenge is to treat the build steps as processes. Containers are perfect for this. A container returns the exit code of the process it’s running, and it only exists for the duration of that process! For a build I need at least two processes: a git clone to download the code and a Maven process to build it. Once again, I have both git and maven images on Docker Hub to serve as examples.
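The idea can be sketched in shell. The run_stage helper and the inkysea/* image names in the comments are hypothetical, for illustration only:

```shell
#!/bin/sh
# Each build stage is a process in a short-lived container; its exit code
# decides whether the pipeline continues. Image names below are illustrative.
run_stage() {
  name="$1"; shift
  echo "Running stage: $name"
  "$@"
  rc=$?
  if [ "$rc" -ne 0 ]; then
    echo "Stage '$name' failed with exit code $rc; aborting" >&2
    exit "$rc"
  fi
}

# In a real pipeline these would be docker run commands, e.g.:
#   run_stage clone docker run --rm -v build-vol:/workspace inkysea/git \
#       git clone https://github.com/example/project.git /workspace/project
#   run_stage build docker run --rm -v build-vol:/workspace -w /workspace/project \
#       inkysea/maven mvn -B package
run_stage demo sh -c 'exit 0'
```

The pipeline engine only needs to watch exit codes; no agent state to track.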

Volumes!

Now that I have three images (git, maven and Log Insight), I need a way to share data between them. A shared volume works perfectly for this.
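A sketch of the wiring, assuming Docker is available; the image names and repository URL are illustrative placeholders:

```shell
#!/bin/sh
# Share a workspace between build containers via a named volume.
# Image names and the repo URL are placeholders, not the author's exact images.
volume_demo() {
  if ! command -v docker >/dev/null 2>&1; then
    echo "docker not available; commands shown for illustration only"
    return 0
  fi
  docker volume create build-vol
  # Stage 1: clone the source into the shared volume.
  docker run --rm -v build-vol:/workspace inkysea/git \
    git clone https://github.com/example/project.git /workspace/project
  # Stage 2: build from the same volume.
  docker run --rm -v build-vol:/workspace -w /workspace/project \
    inkysea/maven mvn -B package
}
volume_demo
```

The volume is the only thing the containers have in common; each one stays a single-purpose process.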

Got the picture?  Now here is the frame.

[Diagram: vRealize Code Stream pipeline with git, maven and Log Insight containers sharing a volume]

Pipeline!

I use Code Stream to create the containers on Photon OS and manage each stage based on the exit status of each process run. Note that the volume and the Log Insight container persist until I have them cleaned up once I’m done with the build.

How do I trigger the pipeline?  With a webhook on the git server.  Every time a code commit occurs, the webhook tells the pipeline to build.  This is perfect for an agile environment that treats every commit as a potential release candidate.
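For example, a git post-receive hook can be this small. The trigger endpoint URL below is an assumption (a placeholder, not a documented Code Stream API path); substitute your pipeline’s actual REST trigger:

```shell
#!/bin/sh
# Hypothetical post-receive hook: notify the pipeline of every push.
# The endpoint URL is a placeholder, not a real Code Stream API path.
notify_pipeline() {
  commit="$1"; ref="$2"
  # Fire-and-forget: '|| true' keeps a notification failure from blocking the push.
  curl -s --max-time 5 -X POST -H "Content-Type: application/json" \
    -d "{\"commit\":\"$commit\",\"ref\":\"$ref\"}" \
    "https://codestream.example.com/api/pipeline-triggers" || true
}

# git feeds "<old-sha> <new-sha> <ref>" lines to the hook on stdin:
#   while read old new ref; do notify_pipeline "$new" "$ref"; done
notify_pipeline 0000abc refs/heads/master
```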

As you can see below, my maven build failed.  That’s cool because I can now look at Log Insight and see my maven build log and surefire JUnit tests to figure out where the problem is.

[Screenshot: micro-service build pipeline in Code Stream]

Visibility!

Inside Log Insight I can now search for my project using tags created when I instantiated the Log Insight agent during the build. In this case, my pipeline configures the agent to create a “project” tag in Log Insight, named after the code project. I can also add a “buildNumber” tag to search for a particular build.
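For reference, tags like these live in the agent’s liagent.ini. A sketch of what the pipeline might render, where the server name, directory and tag values are all examples:

```ini
; Sketch of a generated Log Insight agent config (values are examples).
[server]
hostname=loginsight.example.com

; Ship the Maven build output, tagged so each build is searchable.
[filelog|maven-build]
directory=/workspace/project/target
include=*.log;*.txt
tags={"project":"my-project","buildNumber":"42"}
```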

[Screenshot: searching Log Insight by project tag]

I can then view the entries in their context, which gives me all the data I need to know why the build failed.

[Screenshot: viewing log entries in context]

Once the logs are in Log Insight I can get fancy with dashboards collecting metrics and also alerting if certain events are logged.  Pretty slick!

Want to learn more?

I’ll be speaking at VMworld 2016 on this topic.  I’m partnering with Ryan Kelly on “vRA, API, CI, Oh My!”   If you are attending VMworld, stop by and find me.  If you aren’t attending, I’ll post more information after VMworld.


Photon Controller .9 in a lab!

I’ve been playing around with Photon Controller since .8 and I’m impressed by its potential. VMware has a strong cadence for Photon Controller, as .9 dropped a little more than a month after .8.

I’m sharing my experiences with running Photon Controller in a lab.

First some info on the .9 release:

  • The OVA for the deployer is 3x smaller than .8!
  • Support for Kube-up is included. This required changes on the Kubernetes side as well; those changes are currently in master on the Kubernetes project.
  • Docker Machine support!
  • Production-ready BOSH CPI

Get the bits here!

Now for the magic!

My Photon lab is a group of nested ESXi hosts running on a single host with 8 cores, 96GB of memory and a 256GB SSD. I’m using one node out of my C6100 to run the lab. There has been a drop in memory prices recently, so I doubled the memory in my lab.

That being said, you can install on far less memory and here is how!

I run Photon Platform on nested ESXi and the performance is exceptional for a lab. I’m running a total of four nested hosts: one host for management and three hosts as cloud nodes for workload. My hosts are each set to 18GB of memory. However, you can deploy Photon with K8S using less than 4GB per host (16GB total) if you really press.

  1. Install the Photon deployer on your physical host or laptop. The deployer must be network accessible to the ESX hosts where Photon Controller will be deployed.
  2. Create nested ESX hosts for the Photon Controller management and cloud nodes. Each host will require a minimum of 4GB of memory. I personally run with 18GB due to my physical host’s memory size.
    • Get the nested ESX OVA by William Lam and install it on your ESX host.
      1. Ensure that promiscuous mode and forged transmits are enabled on the virtual switch on the physical ESX host.
      2. Ensure that MAC learning dvFilter is enabled on the nested hosts.
    • Create a shared VMDK.  Your hosts will all use this VMDK as their image datastore to share images across all the hosts.
      1. Ensure that the VMDK is created as thick eager-zeroed. I’m utilizing less than 1GB in my lab with a K8S installation. Adjust the size if you plan on loading more images.
      2. Ensure that the VMDK is enabled for multi-writer.
    • Create a local datastore for each host.
      1. The local datastores can be created as thin.
      2. The local datastores will be used for running the VMs, so they should be larger. I’m currently just under 10GB of utilization for the local datastores.
      3. Note that you can add local datastores later if more storage is required.  Photon controller will simply begin utilizing the additional datastores.
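As a sketch, the shared disk can be created from the physical ESXi host’s shell (path and size are examples); the multi-writer flag is then set per nested host in the VM’s configuration:

```shell
#!/bin/sh
# Sketch: create the shared image-datastore disk as thick eager-zeroed.
# Run on the physical ESXi host; path and size are examples.
create_shared_vmdk() {
  if command -v vmkfstools >/dev/null 2>&1; then
    vmkfstools -c 20G -d eagerzeroedthick \
      /vmfs/volumes/datastore1/shared/cloud-store.vmdk
  else
    echo "vmkfstools not found; run this on the ESXi host itself"
  fi
}
create_shared_vmdk
# Multi-writer is then enabled per nested host, e.g. in each VM's .vmx:
#   scsi1:0.sharing = "multi-writer"
```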

3. Modify the Photon Controller deployment file to minimize management memory. I’ve included my YAML deployment file below.

hosts:
  - metadata:
      MANAGEMENT_VM_CPU_COUNT_OVERWRITE: 2
      MANAGEMENT_VM_MEMORY_MB_OVERWRITE: 4096
      MANAGEMENT_VM_DISK_GB_OVERWRITE: 80
      MANAGEMENT_DATASTORE: esx-pp1-local
      MANAGEMENT_PORTGROUP: VM Network
      MANAGEMENT_NETWORK_NETMASK: XX.XX.XX.XX
      MANAGEMENT_NETWORK_DNS_SERVER: XX.XX.XX.XX
      MANAGEMENT_NETWORK_GATEWAY: XX.XX.XX.XX
      MANAGEMENT_VM_IPS: XX.XX.XX.XX
    address_ranges: XX.XX.XX.XX
    username: root
    password: XXXXXXX
    usage_tags:
      - MGMT
  - address_ranges: XX.XX.XX.XX-XX.XX.XX.XX
    username: root
    password: password
    usage_tags:
      - CLOUD
deployment:
  resume_system: true
  image_datastores: cloud-store
  auth_enabled: false
  stats_enabled: false
  use_image_datastore_for_vms: true
  loadbalancer_enabled: true

Note the following from the deployment file.

  • I added the following to restrict the size of the management VM. Note the memory is 4GB; you could deploy with 2GB to further compress the size.

 MANAGEMENT_VM_CPU_COUNT_OVERWRITE: 2
 MANAGEMENT_VM_MEMORY_MB_OVERWRITE: 4096
 MANAGEMENT_VM_DISK_GB_OVERWRITE: 80

  • The management datastore is specified as follows.

 MANAGEMENT_DATASTORE: esx-pp1-local

Note this is a local disk on the host designated as the management host. My config uses only one host for management. If you have more than one management host, then this datastore must be a shared datastore attached to all management hosts.

  • The shared image store connected to all hosts is specified as:

image_datastores: cloud-store

Note that “use_image_datastore_for_vms: true” allows the image datastore to be used for deploying VMs too. You can set this to false if you have other disks attached to be used for the VMs.

  • The IP address range property sets the number of cloud hosts to be used. In this case I use an IP range that spans three IP addresses.

address_ranges: XX.XX.XX.XX-XX.XX.XX.XX

4. Modify the flavors to minimize the K8, Mesos or Swarm deployments. Please read my earlier post or William Lam’s blog on how to set the flavors.


Script the install with Photon CLI

Within my lab I have the install of Photon Controller scripted with the photon CLI. A new tenant that includes a K8S project is ready for use in under 10 minutes. I’ve included the script below. Note: WordPress treats certain characters like ” and — funny. You may have to fix these characters.

#!/bin/bash

# This script assumes a fresh Photon Controller deployment and is for demonstration purposes.
# The script will create a tenant, resource ticket, project and a K8S cluster

P2_DEPLOYER="http://XX.XX.XX.XX"
P2="http://XX.XX.XX.XX:28080"
TENANT="inkysea"
TICKET="k8s-ticket"
PROJECT="go-scale"

# Deploy Photon Controller
photon -n target set $P2_DEPLOYER
photon -n system deploy esxcloud-installation-export-config-CLI.yaml


photon -n target set $P2

photon -n tenant create $TENANT
photon tenant set $TENANT
photon -n resource-ticket create --name $TICKET \
    --limits "vm.cpu 24 COUNT, vm.memory 64 GB, vm 100 COUNT,ephemeral-disk.capacity 2000 GB,ephemeral-disk 12 COUNT"
photon -n project create --resource-ticket $TICKET --name $PROJECT \
    --limits "vm.cpu 8 COUNT, vm.memory 38 GB, vm 25 COUNT,ephemeral-disk.capacity 440 GB, ephemeral-disk 4 COUNT"
photon project set $PROJECT


# Deploy k8s
PHOTON_ID=$(photon -n deployment list | awk '{if (NR!=1) {print}}')
echo "Photon Controller Deployment ID is $PHOTON_ID"

IMG_ID=$(photon -n image create photon-kubernetes-vm-disk1.vmdk -n photon-kubernetes-vm.vmdk -i EAGER)
#echo "Kubernetes Image ID is $IMG_ID"

photon -n deployment enable-cluster-type $PHOTON_ID -k KUBERNETES -i $IMG_ID

photon cluster create -n k8-cluster -k KUBERNETES --dns XX.XX.XX.XX --gateway XX.XX.XX.XX --netmask 255.255.255.0 \
    --master-ip XX.XX.XX.XX --container-network 10.2.0.0/16 --etcd1 XX.XX.XX.XX -s 1


Vote for my VMworld Sessions

I have three sessions proposed for VMworld and I need your help to get there!  If you enjoyed my blog posts on automation or the Jenkins plugin for vRA, now’s your chance to support me.

You can vote for my sessions even if you are not attending VMworld.

  1. Log in to vmworld.com. Remember, you don’t have to attend VMworld, but you do have to create a login on the vmworld site to vote.
  2. Select the star by my session to count as a vote, like the screenshot below!

[Screenshot: VMworld session voting]

My sessions are below.  Click the links and please vote!

Going Cloud Native: Kube-up, getting swarm, we made a mesos! [8916]

  • Paul Gifford and I will both bring our beards and talk about how Photon Controller integrates with Kubernetes, Swarm and Mesos!

vRA, API, CI, Oh My!!! [7674]

  • Did you enjoy the vRA Jenkins plugin?  Come listen to Ryan Kelly and me talk about DevOps best practices with vRA.  We’ll feature how Code Stream is used to manage the pipeline for the Jenkins vRA plugin project, and other good stuff I am currently writing!

Putting Dev into DevOps with vRealize Automation [7673]

  • Looking for an intro to Agile development and how to integrate vRA with Agile?  Ryan Kelly and I have just the intro for you!

Less than 16GB? K8 Photon Controller? No Problem!

VMware recently released Photon Controller .8 as open source.  Many of you are wondering how to install Kubernetes, Mesos or Swarm on top of Photon Controller in a resource-constrained lab.  It turns out it’s very easy to shrink the default installation sizes for Kubernetes, Mesos or Swarm.

I’ll show you how to set up Kubernetes on a 16GB host below.  We’ll start off with a pre-installed Photon Controller .8 host that is configured as both a management node and a cloud node.

To read about installing Photon Controller, I’ll direct you to William Lam’s blog.

Before getting started make sure you have Photon CLI and Kubectl installed.

Get your Tenants, Tickets and Projects!

Log into the Photon Controller and create a tenant.  You can do this with either the GUI or the photon CLI.  For this blog I am creating them in the GUI.

Open your browser to the Photon Controller IP address and browse to the Tenants section.  Hit the + button to create a new tenant.

[Screenshot: adding a tenant]

When creating a tenant using the GUI, you will also be prompted to create a resource ticket.  Go big with the settings for a resource ticket, even if your host doesn’t have the resources to cover the ticket.  I use the following settings.

vCPU 16, memory 48GB, disks 4, VM count 50

[Screenshot: new tenant with resource ticket]

Note that you must declare at least 240GB of storage and 4 disks to install K8, Swarm or Mesos.  But don’t worry: you don’t actually need that much.

You’ll now see your new tenant listed.  Select the upper right corner of the tenant for a drop-down list and select “New Project”.

[Screenshot: tenant created]

Next, create a project; again, go big with the resources but don’t exceed what is in your resource ticket.  A reminder that you definitely need to declare 240GB of storage and at least 3 disks, and a second reminder not to worry if your host doesn’t actually have that much storage or that many disks.

[Screenshot: new project]

When complete it should look similar to this.

[Screenshot: completed tenant and project]
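If you’d rather script this than click through the GUI, here is a sketch of the equivalent photon CLI commands. The tenant, ticket and project names are examples, and the limits mirror the GUI values above:

```shell
#!/bin/sh
# Sketch: GUI-equivalent tenant/ticket/project setup via the photon CLI.
# Names are examples; limits mirror the GUI sizing described above.
setup_tenant() {
  if ! command -v photon >/dev/null 2>&1; then
    echo "photon CLI not found; see the GUI steps above"
    return 0
  fi
  photon -n tenant create lab-tenant
  photon tenant set lab-tenant
  photon -n resource-ticket create --name k8-ticket \
    --limits "vm.cpu 16 COUNT, vm.memory 48 GB, vm 50 COUNT, ephemeral-disk.capacity 240 GB, ephemeral-disk 4 COUNT"
  photon -n project create --resource-ticket k8-ticket --name k8-project \
    --limits "vm.cpu 16 COUNT, vm.memory 48 GB, vm 50 COUNT, ephemeral-disk.capacity 240 GB, ephemeral-disk 4 COUNT"
  photon project set k8-project
}
setup_tenant
```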

Load your images

Photon Controller makes it super easy to set up Kubernetes.  The Photon Controller team has included a VMDK image for Kubernetes.  Download the Kubernetes VMDK and use your photon client to load the image into Photon Controller.

First, set your target.  <IP> is the IP of your Photon Controller management server.

$ ./photon target set http://<IP>:28080
Using target 'http://<IP>:28080'
API target set to 'http://<IP>:28080'

Now create your image.

$ ./photon image create ./photon-kubernetes-vm-disk1.vmdk -n photon-kubernetes-vm.vmdk -i EAGER
Using target 'http://<IP>:28080'
Created image 'photon-kubernetes-vm.vmdk' ID: 3bc55f7c-5bf9-4c52-8d85-c9d3bac4f5d0


What’s your Flavor?

Photon Controller uses flavors to define VM sizing.  The default sizing for the K8 master, slave and etcd VMs is meant for scale and is too large for a small lab.  We’ll make them smaller so they fit in your lab.

View the current flavors:

$ ./photon flavor list
Using target 'http://<IP>:28080'
ID                                    Name                                      Kind            Cost
15bb4c0a-277f-423f-b928-49dd5d3185a0  cluster-vm-disk                           ephemeral-disk  ephemeral-disk 1 COUNT
                                                                                                ephemeral-disk.flavor.cluster-vm-disk 1 COUNT
                                                                                                ephemeral-disk.cost 1 COUNT
3ad4cd9c-2602-4c68-93af-9bb3f64b287c  cluster-other-vm                          vm              vm 1 COUNT
                                                                                                vm.flavor.cluster-other-vm 1 COUNT
                                                                                                vm.cpu 1 COUNT
                                                                                                vm.memory 4 GB
                                                                                                vm.cost 1 COUNT
3f3924ec-8495-4bc2-ab7b-1375489a5a5e  cluster-master-vm                         vm              vm 1 COUNT
                                                                                                vm.flavor.cluster-master-vm 1 COUNT
                                                                                                vm.cpu 4 COUNT
                                                                                                vm.memory 8 GB
                                                                                                vm.cost 1 COUNT
66a81400-bd4a-412a-ba63-5f5ea939a4bc  mgmt-vm-ec-mgmt-192-168-110-512f3de       vm              vm 1 COUNT
                                                                                                vm.flavor.mgmt-vm-ec-mgmt-192-168-110-512f3de 1 COUNT
                                                                                                vm.cpu 1 COUNT
                                                                                                vm.memory 4092 MB
                                                                                                vm.cost 1 COUNT
e31e7b64-5acc-4b76-b3d5-57d03a672091  mgmt-vm-disk-ec-mgmt-192-168-110-512f3de  ephemeral-disk  ephemeral-disk 1 COUNT
                                                                                                ephemeral-disk.flavor.mgmt-vm-disk-ec-mgmt-192-168-110-512f3de 1 COUNT
                                                                                                ephemeral-disk.cost 1 COUNT
Total: 5


Note the two flavors to be adjusted, cluster-master-vm and cluster-other-vm. We’ll change cluster-other-vm to use 2GB of memory and cluster-master-vm to use 2 CPUs and 2GB of memory.

Delete the cluster-other-vm and cluster-master-vm flavors.

Delete the cluster-other-vm:

$ ./photon  flavor delete 3ad4cd9c-2602-4c68-93af-9bb3f64b287c
Using target 'http://<IP>:28080'
DELETE_FLAVOR completed for 'vm' entity 3ad4cd9c-2602-4c68-93af-9bb3f64b287c

Delete cluster-master-vm flavor:

$ ./photon  flavor delete 3f3924ec-8495-4bc2-ab7b-1375489a5a5e
Using target 'http://<IP>:28080'
DELETE_FLAVOR completed for 'vm' entity 3f3924ec-8495-4bc2-ab7b-1375489a5a5e

You can validate that the flavors have been deleted.

$ ./photon flavor list

Now re-create the flavors with your smaller sizes.

Note WordPress does some funny things to the “” and — characters. If you run the command and it fails, please check those characters.

Create cluster-master-vm flavor:

$ ./photon -n flavor create --name cluster-master-vm --kind "vm" --cost "vm.cpu 2.0 COUNT, vm.memory 2.0 GB, vm.cost 1.0 COUNT, vm 1.0 COUNT, vm.flavor.cluster-master-vm 1 COUNT"

Create cluster-other-vm flavor:
$ ./photon -n flavor create --name cluster-other-vm --kind "vm" --cost "vm.cpu 1.0 COUNT, vm.memory 2.0 GB, vm.cost 1.0 COUNT, vm 1.0 COUNT, vm.flavor.cluster-other-vm 1 COUNT"

Now view your updated flavors!

$ ./photon flavor list


Create your K8 Cluster!

Use the following photon CLI command to create your k8 cluster!

./photon cluster create -n kube -k KUBERNETES --dns 192.168.110.10 --gateway 192.168.110.1 --netmask 255.255.255.0 --master-ip 192.168.110.76 --container-network 10.2.0.0/16 --etcd1 192.168.110.70 -s 1

Using target 'http://192.168.110.41:28080'
etcd server 2 static IP address (leave blank for none):

Creating cluster: kube (KUBERNETES)
  VM flavor: cluster-small-vm
  Slave count: 1

Are you sure [y/n]? y
Cluster created: ID = 5e2d16b4-37f9-47ec-bc9c-87d169f37eed                                          
Note: the cluster has been created with minimal resources. You can use the cluster now.
A background task is running to gradually expand the cluster to its target capacity.
You can run 'cluster show 5e2d16b4-37f9-47ec-bc9c-87d169f37eed' to see the state of the cluster.

I used -s 1 to define one slave.  This is important since we only have one host in the lab.  If you have more hosts, then you can have more slaves, e.g. -s 2.

Note that instead of deleting cluster-other-vm, you could also just create a new flavor and pass it with the -v flag when creating the cluster.  For example, if I created a flavor called cluster-small-vm:

./photon cluster create -n kube -k KUBERNETES --dns 192.168.110.10 --gateway 192.168.110.1 --netmask 255.255.255.0 --master-ip 192.168.110.76 --container-network 10.2.0.0/16 --etcd1 192.168.110.70 -s 1 -v cluster-small-vm

We should now have a K8 cluster created.  Use the cluster show command with the ID returned as part of the cluster create step.  The cluster show command will tell you if the cluster is ready for use and the IP addresses for both etcd and the master.

$ ./photon  cluster show 5e2d16b4-37f9-47ec-bc9c-87d169f37eed
Using target 'http://192.168.110.41:28080'
Cluster ID:             5e2d16b4-37f9-47ec-bc9c-87d169f37eed
  Name:                 kube
  State:                READY
  Type:                 KUBERNETES
  Slave count:          1
  Extended Properties:  map[container_network:10.2.0.0/16 netmask:255.255.255.0 dns:192.168.110.10 etcd_ips:192.168.110.70 master_ip:192.168.110.76 gateway:192.168.110.1]

VM ID                                 VM Name                                      VM IP            
537ad711-e063-4ebc-a0e5-1f874eb11a4b  etcd-5e435b15-9948-4b40-8f6c-4f8da9263a03    192.168.110.70
e02b9af8-fe66-4d9a-8b89-cbdf06ef548d  master-05ecf4fd-d641-4420-9d12-f73ebed3bff4  192.168.110.76

Run a container on K8 and scale it!

We’ll use a simple tomcat container and scale it on k8. Download the following files to a directory local to your photon CLI.

Tomcat Replication Controller

Tomcat Service

Load the files to create your Tomcat pod using kubectl.

$ kubectl -s 192.168.110.76:8080 create -f photon-Controller-Tomcat-rc.yml --validate=false

$ kubectl -s 192.168.110.76:8080 create -f photon-Controller-Tomcat-service.yml --validate=false

Validate that your pod has deployed and is running:

$ kubectl -s 192.168.110.76:8080 get pods
NAME                                                     READY     STATUS    RESTARTS   AGE
k8s-master-master-05ecf4fd-d641-4420-9d12-f73ebed3bff4   3/3       Running   0          15m
tomcat-server-2aowp                                      1/1       Running   0          6m

Visit the Tomcat instance at http://192.168.110.76:30001

Now scale your pods and go crazy!

kubectl -s 192.168.110.76:8080 scale --replicas=2 rc tomcat-server


Blueprints as Code with vRA and Jenkins

Blueprints as Code with the Jenkins plugin for vRealize Automation 7 is now live on Jenkins-ci.  I’ve included an Infrastructure as Code demonstration that shows vRA 7’s blueprints-as-code capabilities when incorporated into a CI process.  Download the plugin for Jenkins and try it out!

Updated vRA plugin for Jenkins

The Jenkins plugin for vRA has been enhanced!  The plugin has matured since its inception as part of a hack-a-thon during the holidays. See below for a list of enhancements.

  • vRA deployments can now be called in Jenkins build environments, build steps or post-build actions
  • vRA deployments can now be destroyed as part of post-build actions.  Note that if the Jenkins build environment option is used, the deployments are automatically destroyed at the end of the build.
  • Information such as the environment name, machine names with IPs, and NSX load-balancer VIPs is written back to Jenkins as environment variables.

Environment variables and Jenkins

If you’re familiar with Jenkins then you should be familiar with environment variables.  The plugin writes environment information from vRA as environment variables in Jenkins.  Each deployment is assigned an environment variable, allowing you to resolve its name.

For example:

VRADEP_BE_NUMBER_NAME: Provides the deployment name, where NUMBER is an increment starting from 1. The number corresponds to the order of deployments specified under the build environment section.

Example: VRADEP_BE_1_NAME=CentOS_7-18373323

You can leverage the variables in build steps or post-build actions.  For example, the post-build action to destroy a vRA deployment takes the environment variable as the “Deployment Name” parameter.
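For instance, a hypothetical “Execute shell” build step could consume the variable directly (the smoke-test wording is my own example, not part of the plugin):

```shell
#!/bin/sh
# Hypothetical Jenkins "Execute shell" build step consuming the plugin's variable.
# Falls back to a placeholder when run outside Jenkins.
DEPLOYMENT="${VRADEP_BE_1_NAME:-unknown-deployment}"
echo "Running smoke test against deployment: $DEPLOYMENT"
```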

[Screenshot: post-build destroy action with the Deployment Name parameter]

The project page at github has additional information on utilizing the plugin and provides a deeper dive into the environment variables produced by the plugin.

If you have any questions feel free to post them on this blog or hit me on twitter.  If you find any bugs, then please log the bugs on the Jenkins project page.

Jenkins Plugin for vRA 7!


Introducing the Jenkins Plugin for vRA 7!!!

So let’s get the first question that comes to mind out of the way…

Question:  Does the Jenkins plugin for vRA 7.0 compete with vRealize Code Stream?

Answer:  Absolutely not.  The Jenkins plugin for vRA 7 is targeted at teams that need basic integration with Jenkins and vRA.  The Jenkins plugin does not provide release management or release pipeline features. Teams requiring release management and release pipeline features should evaluate vRealize Code Stream to gain visibility and control over their release process.

Now that we’ve established the scope of the Jenkins plugin versus the rich features of vRealize Code Stream, let’s cover a bit of history with vRA and Jenkins.  Prior to vRA 7 and the Jenkins plugin, you only had two options for integrating Jenkins with vRA.

  1. CloudClient: The CloudClient for vRA can be used to integrate Jenkins with vRA.  While CloudClient is a great automation tool for vRA, it wasn’t a seamless integration with Jenkins: it had to be installed on the Jenkins slaves and then called as a shell build step.  Functional, but not really ideal.
  2. vRealize Code Stream:  Code Stream has an excellent plugin for Jenkins that allows you to kick off release pipelines which provision vRA blueprints.  This is an excellent option for Code Stream users.

I’ll stress that the Jenkins plugin for vRA is only compatible with vRA 7. The release of vRA 7 brought enhancements to the REST API that make integrating REST calls with vRA 7 simple.  If you are using an older version of vRA (5.x or 6.x) and want Jenkins integration, then you will have to use the CloudClient or Code Stream.

The Jenkins plugin for vRA 7 has been included in the official Jenkins plugin repository and can be installed just like any other Jenkins plugin.  For instructions please visit the plugin’s wiki.  The plugin is open source and the code is available on GitHub.

If you are running vRA 7 and want Jenkins integration, then check out the Jenkins Plugin for vRA 7.