In my earlier post on using Spark for financial analysis, we either ran the code within IntelliJ or did a manual Gradle build. In this post, let’s look at how we can leverage a Jenkins pipeline for automated builds.
The code discussed in this post is available on GitHub.
Jenkins is a web application and can be run inside a Docker container. Pull the Jenkins Docker image from Docker Hub and run the container using the following command. This starts a container and runs the Jenkins application on localhost, port 8080.
docker run -p 8080:8080 -p 50000:50000 -v jenkins:/var/jenkins_home jenkins/jenkins:lts
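On first startup, Jenkins asks for an initial admin password to unlock the setup wizard. Assuming the container was started with the command above, one way to retrieve it (the container name or ID is whatever `docker ps` reports for the Jenkins container) is:

```shell
# Read the initial admin password from inside the running Jenkins container.
# Replace <container-id> with the ID shown by `docker ps`.
docker exec <container-id> cat /var/jenkins_home/secrets/initialAdminPassword
```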
Create a pipeline project and select the option to fetch the Jenkins pipeline script from Git. Credentials for connecting to the remote repository will be required. The script is defined in a file with the default name “Jenkinsfile”.
The Jenkinsfile contains the pipeline code, defined as a Groovy script, and is pulled from the remote repository when the build is triggered. This approach keeps the CI/CD pipeline code version-controlled in SCM, making it easy to maintain and to track changes to the software delivery process.
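As a point of comparison, the same two stages could also be written in Jenkins’ declarative pipeline syntax rather than the scripted style used in this post. A minimal sketch (using the same repository URL and branch as the scripted version below) might look like this:

```groovy
// Hypothetical declarative equivalent of the scripted pipeline used in this post
pipeline {
    agent any
    stages {
        stage('Prepare') {
            steps {
                // Pull the latest code from GitHub
                git url: 'git@github.com:asardana/spark-financial-analysis.git',
                    branch: 'master'
            }
        }
        stage('Build') {
            steps {
                // Build the jar, skipping tests
                sh './gradlew clean build -x test'
            }
        }
    }
}
```

The declarative form trades some of the flexibility of scripted Groovy for a stricter, more readable structure; this post uses the scripted style.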
The pipeline defines multiple stages that are executed on the Jenkins machine. The Prepare stage connects to GitHub and pulls the latest code; the Build stage then runs the Gradle build to create the build artifact (a jar file) for the Spark Financial Analysis application.
#!/usr/bin/env groovy

node {
    stage('Prepare') {
        // Start from a clean workspace, then pull the latest code from GitHub
        cleanWs()
        git(
            url: 'git@github.com:asardana/spark-financial-analysis.git',
            credentialsId: '8af17cba-4867-4f7d-be22-ea4f1bb16591',
            branch: 'master'
        )
    }
    stage('Build') {
        if (isUnix()) {
            echo 'unix os'
            sh 'pwd'
            // Build the jar, excluding the test task
            sh './gradlew clean build -x test --info'
        } else {
            echo 'not a unix os'
        }
    }
}
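One possible extension, not part of the pipeline above, is an extra stage that archives the jar so it can be downloaded from the Jenkins build page. A sketch, assuming the Gradle build writes the jar to the default build/libs directory:

```groovy
// Hypothetical additional stage: keep the jar produced by the Gradle build
stage('Archive') {
    archiveArtifacts artifacts: 'build/libs/*.jar', fingerprint: true
}
```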
Finally, trigger the Jenkins build to execute the pipeline.