
Developer Devolution: Why I Stopped Using Vagrant

Vagrant is an awesome tool and works really well, at least for most applications. At WDS, a LOT of our peeps use Vagrant for their day-to-day activities, and chances are you’ve seen Brad Parbs’ VV at some point, or may be using it right now. However, there comes a turning point in every developer’s life where he/she must evolve, or, in some cases, devolve. So here’s my story.

My Development Evolution

When you start developing, you’ll quickly realize you need to get some sort of development environment up. In the beginning, I was actually using direct FTP before I had heard of Git or any sort of version control. I started out by using Dreamweaver for FTP and syntax highlighting, back when it was still owned by Macromedia.

It wasn’t until about a year before WDS that I started using Sublime Text 2, and I slowly evolved into using PHPStorm, which I now use daily. My development environment, however, wasn’t nearly as evolved. Upon starting with WDS I was actually using XAMPP, since I’m a Windows guy. XAMPP was my go-to development environment for the longest time, considering its ease of setup (just download a zip file) and overall reliability.

The Vagrant Story

Coming from XAMPP, Vagrant is a godsend, especially for us non-Linux folks on Windows. For the longest time, Vagrant fans preached to me: it’s SO MUCH BETTER, they said, and indeed it was. With XAMPP, I had to edit the vhosts file, create a database, and download and install WordPress by hand any time I wanted to create a new site.
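
For the curious, that manual routine looked roughly like this (a sketch rather than my exact steps; the site name and paths are placeholders for a typical Windows XAMPP install):

```bash
# Rough sketch of the manual XAMPP routine; site name and paths are placeholders.
# 1) Add a <VirtualHost> block by hand to
#    C:\xampp\apache\conf\extra\httpd-vhosts.conf pointing at the new docroot,
#    then restart Apache from the XAMPP control panel.
# 2) Create the database:
mysql -u root -e "CREATE DATABASE new_site"
# 3) Download WordPress and unpack it into the docroot:
curl -LO https://wordpress.org/latest.zip
unzip latest.zip -d /c/xampp/htdocs/new_site
# latest.zip unpacks into a wordpress/ folder, so move its contents up a level:
mv /c/xampp/htdocs/new_site/wordpress/* /c/xampp/htdocs/new_site/
```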

For a while it worked fine, but then came my never-ending thirst for knowledge. I thought, “Hmm, so what is this Vagrant?”

I decided on a whim to install it and give it a go. After about a two-hour session with Parbs on getting set up, it was installed, and I was so happy. Now I only needed to run one nifty little command, `vv create`, and I was up and running in just a few minutes.

About a year after installing Vagrant and using it regularly for my day-to-day environments, I’m eternally grateful for the speed with which it sets up projects and for its simplicity: no more juggling vhost files/settings or setting up SQL databases! Overall, I was happy in my ignorance regarding how to actually do most of this myself.

Vagrant’s Demise

Earlier this year, PHPStorm released an update that allowed project-level interpreters, as well as the ability to use PHPCS remotely, which is absolutely amazing! For my day-to-day, I actually set up Vagrant as a remote interpreter, pointed it at the PHPCS installed there, and it pretty much worked as intended.

However, over time, with more projects, Vagrant started to slow down. Taking upwards of twenty to thirty minutes to either up or halt the VM is just absolutely absurd! If I wanted to take my laptop somewhere and work (in sunlight), I was forced to copy the required databases and files/Git repos to the laptop just to go somewhere. And believe me, some projects are gigabytes of data.

This presented a REAL problem for me. By nature, I’m not that mobile; I tend to work in front of my quad-monitor setup daily from nine to five, and if life threw me a curveball, I had to adapt or take a personal day. Having to ‘prep’ to leave would sometimes take hours depending on which project I was scheduled for. Failing to meet a deadline is absolutely unacceptable, regardless of the reason.

Project Overload

Vagrant is a box. We know this. It’s a virtual machine that is meant to operate as if you’re looking at a completely separate machine. The first and most obvious benefit to a Windows user like me is that I can comfortably develop for a Linux OS without having to install Linux. However, the number of projects we work on daily at WDS varies; we have our slow days, and we have crunch time. Just like any other developer, I typically leave a site installed on Vagrant until our client is out of the support phase.

This is the problem: at any given time I have ten to twenty sites online at the same time, on one Vagrant box. The startup time for the box was just horrendous; I could literally go make a sandwich, drive to Starbucks and back again, and it still wouldn’t be done. When Vagrant starts up, at least from my understanding, it has to go through a sort of ‘provisioning’ startup process: reading all those configs, starting up the database, and so on. The amount of time it took was just unreasonable.

Halting the virtual machine was even worse. Have you ever tried backing up 20GB of databases on 2GB of RAM? If so, I’m sure you feel my pain. During shutdown, Vagrant has to back up the databases, which led me to simply leave it on for quite some time. However, the longer it was on, the more problems I had: MySQL running out of memory, PHP freezing, and more. This just added to my development time.
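
For context, the backup step itself is nothing exotic; on a VVV-style box it amounts to something like this (a hedged sketch, with the credentials and backup path as assumptions):

```bash
#!/bin/bash
# Hedged sketch: dump every non-system database to its own .sql file so the
# data survives a halt or destroy. Credentials and paths are placeholders.
BACKUP_DIR=/srv/database/backups
mkdir -p "$BACKUP_DIR"
for db in $(mysql -u root -proot -N -e 'SHOW DATABASES' \
            | grep -Ev '^(information_schema|performance_schema|mysql|sys)$'); do
  mysqldump -u root -proot "$db" > "$BACKUP_DIR/$db.sql"
done
```

Multiply that across twenty sites with multi-gigabyte databases and you can see where the shutdown time goes.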

Lack of Mobility

For me, as I said, mobility with a Vagrant install is limited. There are definitely options out there, however, like “Moving VirtualBox and Vagrant to an external drive,” so it can be done if you’re okay with lugging around a hard drive or thumb drive. For me, that’s unacceptable; this is the age of the cloud, yeah? Having to take a hard drive with me was not going to happen.

This led me to ‘backing up’ my SQL databases, project files (including .git), and in some cases even the entire /uploads directory onto my little 16GB thumb drive. Copying over these things would take hours, especially if I had more than one project for that day. This, in turn, meant having to prep hours ahead of time, and with three kids, that’s not much sleep if I have to get up more than two hours early.
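
The ‘prep’ itself was nothing fancy, roughly along these lines (a sketch; the project name, mount point, and paths are all placeholders):

```bash
# Hedged sketch of the prep-to-go-mobile routine; every name here is a placeholder.
PROJECT=client-site
DRIVE=/mnt/thumbdrive
mkdir -p "$DRIVE/db" "$DRIVE/sites"
# Dump the project database straight onto the drive:
mysqldump -u root -p "$PROJECT" > "$DRIVE/db/$PROJECT.sql"
# Copy the working tree, .git and all (this is where /uploads eats the hours):
rsync -avh --exclude 'node_modules' \
      "/srv/www/$PROJECT/" "$DRIVE/sites/$PROJECT/"
```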

The Solution

A fully remote development environment.

For about two years I’ve owned a rack server from SoYouStart.com. It has mainly held my gaming servers and my personal websites. However, with a whopping 2TB of space, I was utilizing a mere fraction of what was available, so with the addition of the new PHPStorm features, I figured I’d try it out.

For security reasons, I obviously don’t keep the Git credentials on my server. I’d really hate for it to get hacked and have someone gain the ability to push/pull code changes, so I do ‘technically’ have all the code locally as well.

Having it set up in this manner allows me to keep the code and databases in one location for every project. This means if I need to just grab the laptop and go, now I can do that. I don’t have to worry about bringing a database and/or files to sync on my laptop; all I need to do is a pull when I sit down.

Additionally, if I need to illustrate something to another developer or designer, or if something is broken on my local versus the client’s site, it’s easier to debug. Utilizing a remote server lets others actually view my development environment instead of limiting us to just a Skype screen-share.

And that’s why I stopped using Vagrant.

Have any of you had the same issues? Who else has tried a fully remote development environment?

Comments

24 thoughts on “Developer Devolution: Why I Stopped Using Vagrant”

  1. One downside I see is that you’re completely hindered from offline development, correct?

I run a Vagrant install for the very large multisite I manage at my day job. I run dev copies of all my client sites using MAMP and Beanstalk. DB syncing is an issue, but I really only need to export the production DBs and SFTP /uploads every once in a while to get synced up.

    Admittedly this is still pretty crude, but way better than what I was doing back in my “cowboy” days.

    Out of curiosity, why didn’t you give Docker a whirl?

    1. @Tom,

      It is absolutely true that I’m online only. At the time I hadn’t heard of Docker. And with Storm’s new feature of remote development environments, I just had to give this a whirl.

      It gave me the opportunity to learn more about setting up Nginx, as well as the flexibility of not having to sync gigabytes of databases between devices.

  2. Honestly, it sounds like you were just using a Vagrant box that wasn’t very well architected. I’d assume VVV from your mention of VV. The big problem with that box is the lack of an idempotent provisioning setup, meaning every time you create a new site and reprovision, it starts from ground zero and installs/updates everything. That right there is where all your time is going.

    I have a box I use that is set up nearly identically to VVV (e.g. folder structure, database backups, etc.), but provisions new sites in under 1 minute because it uses Chef as a provisioner instead of bash. It’s not really Vagrant’s fault that a poorly architected box is slow to provision.

    1. Hey Zeke,

      I was using VVV, and for me the problem was speed with halting and upping the VM. All instances were on one box, so as you can imagine, the SQL server instance grew quite large. So halting would take 40+ minutes.

      Add to that, if I needed to sync my laptop, I was forced to download the ‘database of the day’ to keep my changes in sync, which required me to prep the day before. Sometimes that’s not possible, as life throws curveballs once in a while, so I would find myself spending company time copying over databases and images.

      For some clients, this just wasn’t feasible to continue with. Databases reaching 1GB in size, and the media library… well I don’t even want to go there.

      Overall, I’ve always been intrigued by the internals of setting up a web service. I’ve done it with Apache, and I’m somewhat of a Linux guy, so I figured, why not just set up a remote environment on my server?

      Typical security measures are in place, like ModSecurity and firewalls, and I obviously lock down the dev environment with a password (to prevent indexing, etc.). Couple that with a few other security measures and I’m pretty much Fort Knox.
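
      For reference, the password lockdown is just plain HTTP basic auth; roughly (paths and the username are placeholders):

      ```bash
      # Hedged sketch: password-protect the dev vhost with HTTP basic auth.
      # Paths and the username are placeholders.
      sudo htpasswd -c /etc/nginx/.htpasswd devuser   # htpasswd ships with apache2-utils
      # Then, inside the dev site's server block:
      #   auth_basic "Dev environment";
      #   auth_basic_user_file /etc/nginx/.htpasswd;
      sudo nginx -t && sudo systemctl reload nginx
      ```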

      Admittedly there is a downside to this: I’m not able to work offline. But let’s be honest, unless I’m in the middle of the desert with no cell phone, I’m always going to have internet access.

      Thus far, I’ve had only one major problem with this setup: my partitioning on the hard drive led to the MySQL database running out of space. But overall, my mobility is where I like it to be so far.

      1. Have you tried just using a rewrite rule in htaccess/Nginx/WP to redirect media from your local URL to the production URL, making copying media around a thing of the past?
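
        Something like this, roughly (an Nginx sketch; the include path and production URL are placeholders):

        ```bash
        # Hedged sketch: serve /wp-content/uploads locally when the file exists,
        # otherwise redirect the request to production, so the media library
        # never has to be copied down. Include path and URL are placeholders.
        sudo tee /etc/nginx/includes/uploads-fallback.conf > /dev/null <<'EOF'
        location ~ ^/wp-content/uploads/ {
            try_files $uri @production;
        }
        location @production {
            return 302 https://client-site.com$request_uri;
        }
        EOF
        # Reference it with `include includes/uploads-fallback.conf;` inside the
        # dev site's server block, then reload:
        sudo nginx -t && sudo systemctl reload nginx
        ```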

      2. I have, but the overall problem I had was the database. Larger clients have larger databases, so having to mirror the database when I barely use my laptop could take an extra hour or so, depending on the projects I was scheduled for that day.

  3. The biggest difference between your described Vagrant setup and how I use it is probably that you were running all the sites inside 1 VM. I create a vagrant machine for each project and its vagrant configuration is included with the repo.

    I’m a bit perplexed by you saying your halts take a long time. My halt takes seconds for any of my projects. Now, I could see a destroy taking longer perhaps if you’re dumping out your databases so they will be preserved the next time you up the machine. I generally don’t destroy my machines that often though, unless I’m not going to be working on the project for an extended period of time. I’m not really constrained by disk space, so having them on my machine but stopped isn’t really resource intensive. I suppose it would depend a bit on your vagrant config, but with how I set mine up, dumping the database for a `vagrant halt` is not necessary. It will be exactly how I left it the next time I do a `vagrant up`.

    A `vagrant up` for a VM that was destroyed or hasn’t been created locally yet can be a bit time consuming, I admit. But I think my biggest takes in the 15-20 minute range, and that one has a HUGE database (multi-millions of records in the DB). Most of my WP projects can get up and running in sub-10 minutes from a “cold start” (i.e. the machine wasn’t just halted, but needs to be created from scratch). A project doing a warm start (i.e. was just halted) is 1-2 minutes, even for the largest project I have.

    Anyhow, I’m not really trying to convince you to switch back. As developers, we all have our preferences and if you are happy with the remote dev environment then, awesome! Whatever makes you the most productive is what really matters.

    I was only looking to highlight that you might be able to get things working with a different Vagrant setup. I’m probably a bit odd in that I’ve not really liked the various boxes I’ve seen out there for WP (VVV, etc). My goal with Vagrant is to give a new developer everything they need to get up and running when they clone the repo. General workflow for the projects I create is: clone the repo, vagrant up, vagrant ssh, start gulp. The exception being on large projects they might need to download the database dump from somewhere else, as we don’t commit really large DBs to git. But that’s just a matter of downloading the file and putting it in the right place. The Vagrant VM knows to look for it and will load it if the file exists.
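
    In practice that workflow is only a few commands (the repo URL is a placeholder):

    ```bash
    # Rough sketch of the onboarding workflow described above; repo URL is a placeholder.
    git clone git@example.com:client/project.git && cd project
    vagrant up    # first run provisions the VM and loads the DB dump if one exists
    vagrant ssh   # shell into the VM
    gulp          # start the build/watch task
    ```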

    Definitely think it’s good that you’re exploring though and finding what works best for you. Even better that you’re sharing your experience for others that might want to try something else that might work better for them.

    1. The overall issue for me was the mobility. It was just taking too much time to up and halt the machine; then, if I had to go somewhere, I’d have to prepare the day before by moving the databases and images (if any) to the other laptop.

      Some client databases (and images) were just too large to move in an adequate amount of time. Then add to that the fact that sometimes the instance wasn’t even on the laptop yet and would have to be created.

      Sometimes, I wasn’t even given notice that I would have to be mobile, so I was using company time to move stuff to the other device.

      Overall it was very frustrating to me as a developer to have to sit and wait for things. I live on the keyboard, and having to wait was just… well… boring!

      One major issue I noticed with the slowdown is what you pointed out: multiple instances on the same VM. At one point I had 20 sites installed on the box, so during my move over to a fully remote dev environment I did a few tests by deleting instances and checking up/halt times.

      I now have one project that requires Vagrant (purely because I don’t want to fiddle with my Nginx rules), so I have only one instance on the box. Database size is about 300MB for this install, and up/halt times are amazingly fast and responsive.

      Therefore, I suppose the conclusion to the Vagrant issue is that the box was poorly configured (out of the box) for a large number of instances running at once.

      With my current setup, I’ve found a few annoyances using our Git branch model, the main one being that Storm auto-uploads every time I change a branch, and if a branch is grossly out of date, it can take about 5 minutes to upload.

      I’ve only had one project so far that’s given me this issue, so I would say for now, this move is a success for me.

      I think if I could find a foolproof (low-interaction) way of syncing databases and media files between devices, and a way to speed up the up/halt of Vagrant, I would give it another go.

      Until that future comes, however, I’ll stick with this current setup, since I’ve learned a lot so far about how to configure Nginx and a few more internals of the sysadmin world.

  4. I guess this is fine for lone-wolf developers, but remote-only development is pretty difficult when you have a distributed team. It sounds like you adopted an environment that people working in teams might love and applied it to solo work. On a team, local environments and repos keep us from killing each other (for the most part). If we only worked remotely we would need each developer to have their own remote machine, which is fine, except we all have local machines already. I was wondering if you have tried any of the Vagrant plugins that make it easier to switch between machines/sites, like flip. https://github.com/bradp/VVV-Provision-Flipper/blob/master/flip

    I’ve seen other people also use modified provisioning scripts that just do the bare minimum instead of everything possible, and they say this speeds things up a lot. I generally have a lot of task throughput (logging, writing notes, etc.), so I don’t mind letting the VM spin up, but I doubt you have to do any of that, so you end up just waiting around. I guess the point is that if speed is your problem, there are a lot of ways to tweak Vagrant and add plugins that will mitigate the issue; otherwise, like Tom mentioned, you might fall in love with Docker images and shipping code from local to remote in the blink of an eye.

    We do have a one-way database model to address what Tom pointed out. Tom, we generally ONLY make database changes remotely and then sync the db to our local environments; that way we never have to merge conflicting databases. It causes some code to have to be staged in phases, but it’s worked well for the most part. We even have a CLI add-on that lets us pull a fresh copy of the db when we provision our vagrants.
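
    The pull itself can be as simple as a dump over SSH (a sketch; hosts, credentials, and database names are placeholders):

    ```bash
    # Hedged sketch of a one-way database pull: dump the canonical remote DB over
    # SSH and import it locally, so local copies never get merged back upstream.
    # Host, credentials, and database names are placeholders.
    ssh deploy@dev.example.com 'mysqldump -u wp -pSECRET client_db' > client_db.sql
    mysql -u root -proot -e 'CREATE DATABASE IF NOT EXISTS client_db'
    mysql -u root -proot client_db < client_db.sql
    # With WP-CLI on the local box (run from the WordPress root), fix the URLs:
    wp search-replace 'https://dev.example.com' 'http://client.local'
    ```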

    I’ve also been looking at cloud IDEs (since you mentioned remote-only); the prospect of pair programming with both developers driving is very interesting, but so far nothing is panning out.

    1. Don’t get me wrong, we have one central repo, but WDS gives us the flexibility of using our own environments for dev. We also have one central ‘dev’ server which the company shares and deployments are set to go to.

      Look at my setup as my ‘local’; I mean, I know it’s a remote server, but it’s still my local environment. I never really got too deep into Vagrant. I know it’s super configurable; I just couldn’t dive into it.

      The main problem I saw with Vagrant is that, out of the box, the resource allocation was small for a multi-instance box. At one point I had 20 instances on my VM, so during my move over to this new setup I’m using now, I decided to clean house.

      I deleted all but one instance (website) from the VM, and once that was complete, the up/halt process of Vagrant dropped down to just 2 minutes from something like 20.

      In retrospect, that’s probably the main problem I saw. To date I’ve not been able to find a way to mindlessly sync databases and images between devices. If I had known of a way to do that then, I would probably still be on Vagrant.

      To date, however, I’ve learned a lot about setting up Nginx and writing server rules, their parameters, etc., so it was still an education for me. It’s always nice to dive outside of your comfort zone into something new, so when given the chance, I do that.

      1. Thanks for explaining further; I can see that you actually put a lot more thought into this than I gave credit for. I hate to admit it, but I considered going remote dev when my Vagrant slowed down shortly after this exchange.

        I just wanted to share a follow-up: after some more tinkering, I was able to really speed up my Vagrant/VirtualBox VMs. There were a few pieces to this:

        1. convert the Vagrant .vmdk files to fixed-size .vdi files (noticeable boost; see the sketch below)
        2. defrag the new .vdi files (Defraggler; small boost)
        3. install an SSD and set the storage type to SSD in VirtualBox (huge boost: no lag at the start of vagrant commands, fast vagrant up/halt overall)
        4. limit or remove synced folders (significant boost, especially page load speeds)
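
        For anyone curious, steps 1 and 3 are scriptable with VBoxManage; a rough sketch (the VM and disk names are placeholders, and the VM should be powered off first):

        ```bash
        # Hedged sketch of steps 1 and 3 via VBoxManage; names are placeholders
        # and the VM should be powered off first.
        # Step 1: clone the .vmdk into a fixed-size .vdi
        VBoxManage clonemedium disk box-disk1.vmdk box-disk1.vdi --format VDI --variant Fixed
        # Step 3: attach the new disk and flag it as non-rotational so the guest
        # treats it as an SSD
        VBoxManage storageattach "my-vagrant-vm" --storagectl "SATA Controller" \
          --port 0 --device 0 --type hdd --medium box-disk1.vdi --nonrotational on
        ```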

        So between all of this, I was able to achieve no-lag vagrant commands, and my pages load in less than half a second for most projects. The downside is that I stopped using synced folders and I treat my local dev server as if it were remote, but like you pointed out, PHPStorm’s new remote capabilities are great for this.

        It also forced me to really master my Git command-line usage, which is a net positive. Waiting 20 minutes for vagrant commands is unacceptable, and so is waiting 7+ seconds per page load; no matter what environment you are working in, no developer should have to suffer this. I found most of my problems were related to the Windows file system (not supporting NFS) and old SATA drives.

        I was really surprised at just how much the synced folders thrashed my drive, so I’m totally happy SSHing into my own machine to do work; it’s exactly like the remote process anyway, so everything feels consistent.

  5. I’m a bit late into the discussion, but several years ago I decided to actually spend a lot of money and get a MacBook Pro for my development. Best decision I made, and I soon bought a MacBook Air as well.
    I use MAMP Pro and use Dropbox to sync all my files between the two. It gives me the benefit of working in the cloud while also having the offline development option. It does take a bit of setting up the first time around, but it turned out to be a much better option for me than Vagrant, as the setup is simply awesome.

    1. Sounds like exactly what I’m wanting to do. At this time, however, my PC is quite beefy and is built for gaming AND work. Moving to a Mac with four monitors and the amount of power I have would cost a small fortune.

      How do you sync the databases? Do you have to do a dump and move the file to dropbox? Or do you have something magical?

      1. I jammed my entire XAMPP install inside a Dropbox folder for like 10 months a few years back to try to achieve this, but I eventually stopped due to regular conflicts. The files were mostly fine, but the database tables would often wig out, I guess because they get written so fast and Dropbox locks files while writing changes. It got especially bad if I worked on my laptop offline and then got back online, I’m guessing due to timestamp conflicts.

        My eventual solution was to install XAMPP in its default location, use Git for project files, Dropbox for media & assets (kinda like a local S3 store), and HeidiSQL for syncing databases.

        HeidiSQL made it really easy to set up local & remote environments and sync between them with the push of a button. I still use it for quick manual jobs. It’s also free.

        Bear in mind this setup was specifically to resolve conflicts easily, host like 20 sites locally with minimal performance drag, and, mostly, so I could work on the subway. If the laptop was on in the background & I remembered to pull in changes from Git & the db before I hit the road, offline was super easy.

        Anyway, thanks for the great article, imma keep paying attention. Keep us posted on workflow developments.

  6. Hi Jay!

    Thanks, it’s a nice thing for me to take into consideration 🙂

    Ever thought about writing an article about a good Windows webdev workflow, since it’s hard out there for Windows people (at least in my experience)? I haven’t been able to find much info on that subject.

    Love your work! Thanks

    1. Thanks Adam, yes, that article is next on my list. And I don’t want to write Vagrant off altogether; it’s great for small projects.

      But as Matt said above, you may get more performance by using one box per site (or even a few sites per box) versus what I was doing (20 sites on one box).

      My issue also boils down to the server/VM config (as pointed out elsewhere). But I think overall I’m still happy with moving to a fully remote dev environment instead of a local one. Fewer logistics involved in copying files and databases that way, I think.

  7. Jay,

    I know I’m a little late to the party, but would you mind sharing how you went about setting up your sites on your rack server? I would love some insight there. Thanks for the article though. I’ve been doing dev either locally or on a remote subdomain, so this sounds like a great solution.

    1. Honestly, my remote server is set up much like most hosts you’d expect, aside from the fact that I only have one user directory and all my dev sites are under it. You’ll want to start looking into “Virtual Host Configurations,” either for Apache or Nginx, whichever you’re using. This is extremely beneficial when working with multiple projects. There are also tons of tutorials for “Setting up a LAMP rack server” out there (Apache).

      I’d say if you’re new to server setups, start with Apache; it’s easier and there’s more out there to help you. I honestly don’t think there’s room for me to detail all of this in the comments, haha!
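
      That said, a bare-bones Apache virtual host is only a handful of lines; a rough sketch (Debian-style paths, with the domain and docroot as placeholders):

      ```bash
      # Hedged sketch of a minimal Apache virtual host; domain, docroot, and paths
      # are placeholders and assume a Debian/Ubuntu-style Apache layout.
      sudo tee /etc/apache2/sites-available/client-site.conf > /dev/null <<'EOF'
      <VirtualHost *:80>
          ServerName client-site.dev.example.com
          DocumentRoot /var/www/client-site
          <Directory /var/www/client-site>
              AllowOverride All
              Require all granted
          </Directory>
      </VirtualHost>
      EOF
      sudo a2ensite client-site.conf && sudo systemctl reload apache2
      ```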

  8. I hate to troll, but I think this is more a problem with how you set things up. Let me share my version: I have two devices, my 64GB RAM, 10-core PC and my 32GB, 4-core laptop. My development environment needs to move between the two with nothing more than a nudge. The only problem I have right now is finding a command to close all my Vagrants at once; right now I have to do the annoying thing of pressing the shortcut in every PHPStorm window.

    I can move my development environment to my laptop by doing nothing more than installing VirtualBox, the only application I must install.

    The key is I use a Samsung T3 2TB external SSD; I keep all my data and applications on it, and even Vagrant is portable when you move its folder.

    I have about 40 projects in my PHPStorm, each has a Vagrant, and up to 12 can be online at once; they all close at the same speed, a few minutes. No backup is needed, as the data is all on the disk; I just have a backup script on my PC which delta-updates my external drive maybe once a week, so if the disk gets lost I can just buy a new one. This solves many of the problems you have, one being offline working and another being putting sensitive information on your remote server, which the T3 can solve with hardware encryption.

    So I hate to seem like a troll, but I would say you didn’t think your setup through.

  9. You can solve all these problems by just using an external drive and making a Vagrant per project instead of trying to make one Vagrant handle multiple projects. That way the only application you have to install on a new computer is VirtualBox, but there is a way around that too. It also solves all the problems of your setup, including putting other people’s data in a place where it might not be allowed; I use a T3 2TB SSD, which has built-in encryption.

    The only issue I have right now is closing all Vagrants at once instead of having to close each one in PHPStorm.

  10. Sorry, I also just read it again and saw you mention pair debugging. Instead of handing out credentials to random teammates for your remote debugger, you can just use ngrok, which gives your computer a public address on the internet, essentially doing the same thing as making it remote, but local to you and easier to manage. The names are also disposable, which is better for security.
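
    The whole thing is essentially one command (the port is a placeholder for whatever the local site listens on):

    ```bash
    # Hedged sketch: expose the local dev site on a throwaway public URL with ngrok.
    ngrok http 80
    # ngrok prints a temporary forwarding URL that tunnels to localhost:80;
    # kill the process and the URL stops working.
    ```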

