An Interactive Data Science / Engineer / Analyst CV Built with Vue 3 and Google App Engine

Courtney Perigo
7 min read · Oct 20, 2023
Photo by Shahadat Rahman on Unsplash

If you’re reading this, you may have stumbled upon my website and want to learn more about the tools I chose to develop it, or you’re thinking about building your own website and want to know what a personal, interactive CV is all about.

Either way, you’ve come to the right place. In this post, I’ll give a quick overview of my interactive CV build and explain why I chose the tools and technologies behind it.

There’s an ocean of options, so why this build?

This is not my first rodeo. I previously owned a now-retired no-code website built with one of the major content management platforms (think Wix, WordPress, etc.). These tools are an excellent choice for a CV because you can pick a beautiful template, fill it with your projects and links to interesting articles, and showcase your resume.

The thing about a no-code website is that it’s no-code. It isn’t the best tool for showcasing the technical side of what a data scientist or data engineer can do. I decided to retire my old website and take a shot at building, from scratch, a website that could show off my work.

By taking a new approach, I could show potential employers my ability to break down a problem, make decisions about technologies, and create a full-stack application that shows off my skills.

An Interactive CV Shows Off Data Engineering and Data Science Skills

Building an interactive CV gives me the flexibility to show off my engineering skills and how I set up continuous integration and continuous delivery of my CV content.

The project also shows off my data science skills by letting me link to publications or stand up small web apps that add value to my website through machine learning and data visualization.

Data Engineering Showcase 1: Google Cloud Build

For engineering, this project uses a continuous integration and continuous delivery pipeline from GitHub via the Google Cloud Build service. Cloud Build watches for pushes to the main branch of my website’s GitHub repo and automatically builds and deploys the app to Google App Engine.

A YAML example showing the instructions for Google Cloud Build.

steps:
  # create environment variables for the vuejs app
  - name: node
    dir: "courtney-perigo-dotcom"
    entrypoint: npm
    args: ["run", "create-env"]
  # install the requirements
  - name: node
    dir: "courtney-perigo-dotcom"
    entrypoint: npm
    args: ["install"]
  # build the application
  - name: node
    dir: "courtney-perigo-dotcom"
    entrypoint: npm
    args: ["run", "build"]
  # deploy the application
  - name: "gcr.io/cloud-builders/gcloud"
    dir: "courtney-perigo-dotcom"
    args: ["app", "deploy"]
timeout: "1600s"
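
The create-env step above runs a small Node script that isn’t included in this post. As a rough sketch only, it could look something like the following, assuming it writes build-time variables into a .env file for the Vue build to pick up (the script path and variable names are placeholders, not the real ones):

// scripts/create-env.js (illustrative sketch only; not the actual script)
// Assumes Cloud Build exposes the needed values as environment variables
// and the Vue build reads them from a .env file at the project root.
const fs = require("fs");

const envVars = {
  // placeholder names; substitute whatever the app actually needs
  VUE_APP_API_BASE_URL: process.env.API_BASE_URL || "",
  VUE_APP_STRAVA_SERVICE_URL: process.env.STRAVA_SERVICE_URL || "",
};

const contents = Object.entries(envVars)
  .map(([key, value]) => `${key}=${value}`)
  .join("\n");

fs.writeFileSync(".env", contents + "\n");
console.log(`Wrote .env with ${Object.keys(envVars).length} variables`);

A script like this would be wired up in package.json as "create-env": "node scripts/create-env.js" so that npm run create-env works in the pipeline.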

A full step-by-step walkthrough of this process is outside the scope of this post, but Eddy Goh describes it in his Medium article: https://medium.com/google-cloud/guide-to-deploy-vue-js-app-to-google-app-engine-with-cloud-build-from-git-repository-256c3043155e

Data Engineering Showcase 2: Google App Engine

Google App Engine is a fantastic, easy-to-use platform for developing and hosting web applications. It offers the right balance of security, scale, and cost for my project’s budget. In my case, I am using Google App Engine to host the website itself and any analytics microservices I choose to build.

Google App Engine’s standard environment allows any service to scale to 0 instances, so my bill is $0.00 when a service isn’t seeing any traffic, and it can spin a service back up in less than a second when traffic arrives. At the moment, my website is hosted for less than the cost of a Starbucks trip per month.

A YAML example showing the configuration for a standard environment deployment in Google App Engine.

# courtney-perigo-dotcom app.yaml file contents:
runtime: nodejs18

instance_class: F1
automatic_scaling:
  min_instances: 0
  max_instances: 1

handlers:
# Serve all static files with urls ending with a file extension
- url: /(.*\..+)$
  static_files: dist/\1
  upload: dist/(.*\..+)$
  secure: always
# catch all handler to index.html
- url: /.*
  static_files: dist/index.html
  upload: dist/index.html
  secure: always
# env_variables: add these later if needed

Leveraging App Engine’s multi-service deployment options also allows me to simulate a microservices framework for my website. This gives me the flexibility to build APIs that can power data science projects in the future. At the moment, I have a microservice that serves my Strava activities to a data visualization feature.
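
From the frontend’s perspective, each of those services is just another HTTP API. As a minimal, hypothetical sketch (the service name, project ID, and endpoint path below are placeholders, not the real ones), the Vue app could reach a separately deployed App Engine service like this:

// src/api/stravaClient.js (illustrative only; names and paths are placeholders)
// Each App Engine service is reachable at its own URL, and a dispatch.yaml
// could map friendlier routes such as /api/* to a service if preferred.
const STRAVA_SERVICE_BASE = 'https://strava-service-dot-my-project.appspot.com';

export async function getRecentActivities() {
  const response = await fetch(`${STRAVA_SERVICE_BASE}/activities`);
  if (!response.ok) {
    throw new Error(`Strava service returned ${response.status}`);
  }
  return response.json();
}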

Data Engineering Showcase 3: Multiple Languages

Another skill set I wanted to show off was my ability to develop production-quality code in multiple languages. My interactive CV leverages three different programming languages (plus HTML and CSS) to bring it all together. Google App Engine allows me to stand up services in NodeJS (Vue), GoLang, and Python.

A snippet of my Go API showing the transformation of my Strava activity data into JSON for visualization.

var finalActs FinalActivities

for _, a := range athActs {
    var finalAct FinalActivity
    finalAct.Distance = a.Distance
    finalAct.MovingTime = a.MovingTime
    finalAct.StartDate = a.StartDate
    finalAct.StartDateLocal = a.StartDateLocal
    finalAct.TimeZone = a.TimeZone
    finalAct.UtcOffset = a.UtcOffset
    // convert zulu string time to unix time
    time_temp, err := time.Parse(time.RFC3339, a.StartDateLocal)
    if err != nil {
        fmt.Println(err.Error())
        panic(err)
    }
    finalAct.StartDateUnix = int(time_temp.Unix())
    // convert meters to miles and seconds to minutes, then derive pace (minutes per mile)
    miles := a.Distance * 0.000621371
    finalAct.Miles = miles
    minutes := float64(a.MovingTime) / 60
    finalAct.Minutes = minutes
    pace := minutes / miles
    // split the fractional minutes into whole seconds for display
    hanging_decimal := pace - float64(int(pace))
    seconds := float64(math.Round(hanging_decimal * 60))
    finalAct.Pace = pace
    pace_down := float64(int(pace))
    // format pace as M:SS, zero-padding seconds under 10
    if seconds < 10 {
        finalAct.DisplayPace = fmt.Sprintf("%.0f:0%.0f", pace_down, seconds)
    } else {
        finalAct.DisplayPace = fmt.Sprintf("%.0f:%.0f", pace_down, seconds)
    }

    finalActs.Data = append(finalActs.Data, finalAct)
}

c.IndentedJSON(http.StatusOK, finalActs)

Data Science Showcase 1: Portfolio and Writing Samples

An important part of an interactive CV is linking out to the latest articles and featured projects in your portfolio. My website features a flexible data structure to programmatically create content and user interface features.

An example of this is my skills feature. As I build new skills, I can create badges that show off those skills on my CV page. My Vue app reads my data structure and generates badges for each entry in my “resume_skills” list (a sketch of that rendering follows the list below). I use a similar strategy for publications, jobs, and featured content so I can maintain it all easily within my code. The Cloud Build automation pushes any new content directly to production.

To increase scale and flexibility, I can host these data structures in a database — if I ever choose to.

Skills feature on https://courtneyperigo.com/cv (image from author)
resume_skills: [
  {
    name: 'Applied Data Science',
    icon: 'fas fa-flask'
  },
  {
    name: 'Data Engineering',
    icon: 'fas fa-wrench'
  },
  {
    name: 'AI and ML Deployment',
    icon: 'fas fa-robot'
  },
  {
    name: 'Business Intelligence Architecture',
    icon: 'fas fa-chart-line'
  },
  {
    name: 'Cloud Infrastructure',
    icon: 'fas fa-cloud'
  },
  {
    name: 'Python',
    icon: 'fab fa-python'
  },
  {
    name: 'R',
    icon: 'fas fa-chart-bar'
  },
  {
    name: 'SQL',
    icon: 'fas fa-database'
  },
  {
    name: 'GoLang',
    icon: 'fas fa-laptop-code'
  },
  {
    name: 'Javascript',
    icon: 'fab fa-js'
  },
  {
    name: 'Servant Leadership',
    icon: 'fas fa-battery-full'
  }
]
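
As a rough sketch of how a list like this can drive the UI, a Vue 3 component could loop over it with v-for and render one badge per entry. The component name, markup, and class names below are illustrative, not the actual code from my site:

<!-- SkillBadges.vue (illustrative sketch; markup and class names are placeholders) -->
<template>
  <ul class="skill-badges">
    <li v-for="skill in resume_skills" :key="skill.name" class="badge">
      <i :class="skill.icon"></i>
      <span>{{ skill.name }}</span>
    </li>
  </ul>
</template>

<script>
export default {
  name: 'SkillBadges',
  data() {
    return {
      resume_skills: [
        { name: 'Applied Data Science', icon: 'fas fa-flask' },
        { name: 'Data Engineering', icon: 'fas fa-wrench' }
        // ...and the rest of the list above
      ]
    }
  }
}
</script>

Because the template iterates over whatever is in resume_skills, adding a new badge is a one-line change to the list, and the Cloud Build pipeline carries it to production on the next push.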

Data Science Showcase 2: Data Visualizations and Applications

A CV that only shows a resume and writing samples doesn’t bring to life the value you can create for organizations. To show what you can do, why not build out some data-driven applications on your website?

Because my interactive CV is built on a flexible microservices architecture, I can spin up showcases of data-driven apps pretty easily. I can extend the functionality of my website on my own timeline, taking an agile development approach to future capabilities.

The first of these data science features is my Strava Running Data Visualization. This application connects to the Strava API, formats the data, and visualizes the last 30 activities. I’ve automated this application so it programmatically pulls the latest data and shows it to my end users.

Strava Running Data Visualization on https://courtneyperigo.com (image from author)
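
To give a sense of how the frontend consumes the Go service’s output, here’s a hedged sketch that pulls the activity JSON and shapes the most recent 30 runs into simple series for charting. The JSON field names are assumptions on my part; they depend on the Go structs’ JSON tags, which aren’t shown in the snippet above:

// src/stravaViz.js (illustrative only; field names are assumptions)
import { getRecentActivities } from './api/stravaClient'; // client sketch from earlier

export async function loadRunSeries() {
  const payload = await getRecentActivities();

  // Assume the Go API returns { data: [ { miles, display_pace, start_date_unix, ... } ] }
  const recent = payload.data
    .sort((a, b) => a.start_date_unix - b.start_date_unix)
    .slice(-30); // keep the last 30 activities

  return {
    labels: recent.map(act => new Date(act.start_date_unix * 1000).toLocaleDateString()),
    miles: recent.map(act => act.miles),
    pace: recent.map(act => act.display_pace)
  };
}

A structure like this drops straight into the labels/series pattern most charting libraries expect, which keeps the visualization component itself thin.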

In the future, I plan to showcase several machine learning models that create value on my interactive CV, including recommended news articles and papers, or something fun like a generative AI cocktail bot.

A Fun, Rewarding Way to Show Your CV

An interactive CV is a fun, rewarding way to show others what you can do. This strategy can feature engineering pipelines and data visualizations, and it adds real color to what would otherwise be a boring CV website.

This project was a lot of fun to build, and I expect it will be a great side project to continue to show off what I can do. If you want to see how it’s done, please visit my website and Strava repos on GitHub:

