Recently someone sent out a request via Twitter asking how to automatically update images. I suggested using HashiCorp Packer, but not with the default vmware-iso builder: instead, the builder created by the jetbrains-infra team over at GitHub, because it uses the vSphere API rather than the SSH backdoor the official vmware-iso builder relies on.

Infrastructure as code is a big thing nowadays, so I thought it would be nice to help you release your inner DevOps engineer and demonstrate how to put this awesome tool to work.

The first thing you need to do is download Packer for your platform. There are versions available for macOS, FreeBSD, Linux, OpenBSD, Solaris, and Windows. The great thing about Packer is that you don’t really need to install it: it’s a single executable that you can extract to a location of your choice.

Next you need to grab the jetbrains-infra vsphere-iso builder. Beware that the jetbrains-infra builder is available for macOS, Linux, and Windows only. In my case I’m on Windows, so I grabbed the .exe and moved it to the location of my Packer executable. Once all this is in place you can start building your first VM. In order to build a VM you need build instructions.
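Packer finds external builders by filename: a binary named packer-builder-&lt;type&gt; sitting next to the packer executable registers the &lt;type&gt; builder. A minimal sketch of the layout (the touch commands create stand-in files; in reality you extract the real binaries there):

```shell
# Stand-in layout: Packer discovers the plugin because the file is named
# packer-builder-vsphere-iso(.exe) and lives next to the packer binary.
mkdir -p packer_1.3.3_windows_amd64
touch packer_1.3.3_windows_amd64/packer.exe                     # core Packer binary
touch packer_1.3.3_windows_amd64/packer-builder-vsphere-iso.exe # jetbrains-infra builder
ls packer_1.3.3_windows_amd64
```

On macOS or Linux the same convention applies, just without the .exe extension.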

To get up and running fast I decided to use one of the examples: an Ubuntu VM. I copied over the example files, updated the username and password values, and ran Packer for the first time. Unfortunately… ERROR.

Build 'vsphere-iso' errored: default datacenter resolves to multiple instances, please specify

==> Some builds didn't complete successfully and had errors:
--> vsphere-iso: default datacenter resolves to multiple instances, please specify

==> Builds finished but no artifacts were created.

Turns out this was simply a matter of RTFM: all objects that are not unique in your vSphere inventory need to be defined explicitly. In my case this meant adding the following options to the example JSON file:

  • datacenter
  • cluster
  • resource_pool
  • datastore
  • network

Now the network property demanded some extra attention, as the same portgroup name was used multiple times in my environment. If this is the case, you can define the network in the format “folder/portgroupname”. With this last issue sorted out I successfully built my first VM with Packer.
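In my template that boiled down to a fragment like this inside the builder block (the names are placeholders for my environment):

```json
{
  "datacenter": "mydatacenter",
  "cluster": "mycluster",
  "resource_pool": "myresourcepool",
  "datastore": "mydatastore",
  "network": "myfolder/myportgroupname"
}
```

Only the objects that are ambiguous in your inventory strictly need this; unique objects are resolved automatically.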

H:\packer_1.3.3_windows_amd64>packer build ubuntu-16.04
vsphere-iso output will be in this color.

==> vsphere-iso: Creating VM...
==> vsphere-iso: Customizing hardware...
==> vsphere-iso: Mount ISO images...
==> vsphere-iso: Creating floppy disk...
    vsphere-iso: Copying files flatly from floppy_files
    vsphere-iso: Copying file: H:\packer_1.3.3_windows_amd64/preseed.cfg
    vsphere-iso: Done copying files from floppy_files
    vsphere-iso: Collecting paths from floppy_dirs
    vsphere-iso: Resulting paths from floppy_dirs : []
    vsphere-iso: Done copying paths from floppy_dirs
==> vsphere-iso: Uploading created floppy image
==> vsphere-iso: Adding generated Floppy...
==> vsphere-iso: Set boot order temporary...
==> vsphere-iso: Power on VM...
==> vsphere-iso: Waiting 10s for boot...
==> vsphere-iso: Typing boot command...
==> vsphere-iso: Waiting for IP...
==> vsphere-iso: IP address:
==> vsphere-iso: Using ssh communicator to connect:
==> vsphere-iso: Waiting for SSH to become available...
==> vsphere-iso: Connected to SSH!
==> vsphere-iso: Provisioning with shell script: C:\Users\BOFH\AppData\Local\
    vsphere-iso: bin
    vsphere-iso: boot
    vsphere-iso: dev
    vsphere-iso: etc
    vsphere-iso: home
    vsphere-iso: initrd.img
    vsphere-iso: lib
    vsphere-iso: lib64
    vsphere-iso: lost+found
    vsphere-iso: media
    vsphere-iso: mnt
    vsphere-iso: opt
    vsphere-iso: proc
    vsphere-iso: root
    vsphere-iso: run
    vsphere-iso: sbin
    vsphere-iso: srv
    vsphere-iso: sys
    vsphere-iso: tmp
    vsphere-iso: usr
    vsphere-iso: var
    vsphere-iso: vmlinuz
==> vsphere-iso: Shut down VM...
==> vsphere-iso: Deleting Floppy drives...
==> vsphere-iso: Deleting Floppy image...
==> vsphere-iso: Eject CD-ROM drives...
==> vsphere-iso: Clear boot order...
Build 'vsphere-iso' finished.

==> Builds finished. The artifacts of successful builds are:
--> vsphere-iso: example-ubuntu

I didn’t like the fact that your vSphere credentials are readable in the template, so I looked for a way around this. The easiest option was to use user variables in the template. I created a new JSON file with two variables, vcenter_username and vcenter_password, and referenced those variables in my template (don’t worry, I’ll display the final result at the bottom of this post). On all subsequent builds I used the -var-file parameter to include the variable file that contains my username and password. An option to use hashed credentials would be preferable, but this was an easy fix.
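The variable file itself is nothing fancy; a minimal sketch (the values are obviously placeholders):

```json
{
  "vcenter_username": "administrator@vsphere.local",
  "vcenter_password": "SuperSecret1!"
}
```

Saved as, say, credentials.json, it is pulled in with packer build -var-file=credentials.json ubuntu-16.04.json, and the template picks the values up through the {{user `vcenter_username`}} syntax. Keep this file out of version control.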

So to summarize, if you want to start using Packer with vSphere you’ll need the following:

  • A copy of Packer
  • A copy of the jetbrains-infra vsphere-iso builder
  • A build template
  • A variable file containing vSphere credentials (optional)

When you put all these things together you can start building all your VM images from code. And from there, who knows what cool things you can do? Maybe automatically convert the generated VMs to templates and put them in a content catalog, then subscribe to that catalog from all your vCenter and vCloud Director instances. The sky is the limit!

This is the final product, an optimized Ubuntu build template. Happy coding.

  {
    "builders": [
      {
        "type": "vsphere-iso",

        "vcenter_server":      "myvcenter",
        "username":            "{{user `vcenter_username`}}",
        "password":            "{{user `vcenter_password`}}",
        "insecure_connection": "true",

        "vm_name": "example-ubuntu",
        "notes": "Some random notes",
        "datacenter": "mydatacenter",
        "cluster": "mycluster",
        "host": "myesxihost",
        "datastore": "mydatastore",
        "network": "myfolder/myportgroupname",
        "resource_pool": "myresourcepool",

        "guest_os_type": "ubuntu64Guest",

        "ssh_username": "jetbrains",
        "ssh_password": "jetbrains",

        "CPUs":            1,
        "RAM":             1024,
        "RAM_reserve_all": true,

        "convert_to_template": false,

        "disk_controller_type":  "pvscsi",
        "disk_size":             32768,
        "disk_thin_provisioned": true,

        "network_card": "vmxnet3",

        "iso_paths": [
          "[mydatastore] ISOfolder/ubuntu-16.04.1-server-amd64.iso"
        ],
        "iso_checksum": "af5f788aee1b32c4b2634734309cc9e9",
        "iso_checksum_type": "md5",
        "floppy_files": [
          "{{template_dir}}/preseed.cfg"
        ],
        "boot_command": [
          "<enter><wait><enter><wait><f6><esc><wait>",
          " initrd=/install/initrd.gz",
          " priority=critical",
          " locale=en_US",
          " file=/media/preseed.cfg",
          "<enter>"
        ]
      }
    ],
    "provisioners": [
      {
        "type": "shell",
        "inline": ["ls /"]
      }
    ]
  }

Rudolf Kleijwegt

I am an experienced IT professional with over 20 years of hands-on experience designing, deploying, and maintaining IT infrastructure in both enterprise and service provider environments. My skills span Linux and Windows and a multitude of server applications, allowing me to excel in a wide range of IT roles. Currently, my primary focus is on the Software Defined DataCenter and DevOps. I am passionate about staying up to date with the latest trends in the industry to achieve superior outcomes.


KOSTIANTYN KHOMKO · May 10, 2019 at 12:12 am

What context is for “{{template_dir}}/preseed.cfg” and what is template dir?

    Rudolf Kleijwegt · May 13, 2019 at 10:05 pm

    The template_dir is a built-in function that references the location where the template being executed is stored. In this example it is used to indicate where the preseed.cfg file can be found. Packer uses that file to create a floppy image containing the preseed (answer) file, so it can be used for an unattended installation of Ubuntu.
