If this is your take, your exposure has been pretty limited. While I agree some devs take it to the extreme, Docker is not a cop-out. It and similar containerization platforms are invaluable tools.
Using devcontainers (Docker containers in the IDE, basically), I’m able to get my team developing in a consistent environment in mere minutes, without needing to bother IT.
Using Docker orchestration I’m able to do a lot in prod, such as automatic scaling, continuous deployment with automated testing, and, in the worst case, near-instantaneous reverts to a previously good state.
And that’s just how I use it as a dev.
As a self-hosting enthusiast, I can deploy new OSS projects without stepping through a lengthy install guide listing various obscure requirements, and if I do want to skip the container (which I’ve only done for a few things) I can simply read the Dockerfile to figure out what I need to do instead of hoping the install guide covers all the bases.
And if I need to migrate to a new host? A few DNS updates and SCP/rsync later and I’m done.
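To give a rough idea of what that looks like in practice (a minimal sketch; the image names, ports, and paths are made up, not any particular project), the whole deployment is usually just a compose file plus a couple of bind-mounted data directories:

```yaml
# docker-compose.yml (illustrative sketch; image names, ports, and paths are invented)
version: "3.8"
services:
  app:
    image: someproject/app:latest        # the OSS project's published image
    ports:
      - "${APP_PORT:-8080}:8080"         # host port can be overridden via an env var
    environment:
      DB_HOST: db
    volumes:
      - ./data:/var/lib/app              # everything worth migrating lives in ./data and ./db
    depends_on:
      - db
  db:
    image: mariadb:10.11
    environment:
      MARIADB_ROOT_PASSWORD: change-me
    volumes:
      - ./db:/var/lib/mysql
```

Moving hosts is then just rsyncing ./data and ./db over, running docker compose up -d on the new box, and updating DNS.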
You know, all this talk about these benefits… when PHP has had this for ages, no BS needed.
I’ll see myself out.
I’ve been really trying to push for more use of dev containers at my org. I deal with so much hassle helping people install dependencies and troubleshoot bizarre environment issues, and then doing it all over again every time there is turnover or someone gets a new laptop. We’re an Ops team, though, so it’s a real struggle to add the extra complexity of running and troubleshooting containers on top of dev concepts that are mostly new to the team anyway.
…what do you mean by using dev containers? Are your people doing development on their host machine?
Agreed there – it’s good for onboarding devs and ensuring a consistent build environment.
Once an app is ‘stable’ within a docker env, great – but running it outside of a container will inevitably reveal lots of subtle issues that might be worth fixing (assumptions become evident when one’s app encounters a different toolchain version, stdlib, or other libraries/APIs…). In this age of rapid development and deployment, perhaps most shops don’t care about that since containers enable one to ignore such things for a long time, if not forever…
But like I said, I know my viewpoint is a losing battle. I just wish it weren’t used so much as a shortcut to deployment; good documentation of dependencies and configuration, plus testing in varied environments, would be my preference.
And yes, I run a bare-metal ‘pet’ server, so I deal with configuration that might otherwise be glossed over by containerized apps. Guess I’m just crazy, but I like dealing with app config at one layer (the host OS) rather than having it spread around across multiple containers.
So far I’ve helped my team of 5 get on them, and some other teams are starting as well. We’ve got developers running Windows, Linux, and macOS on their work machines (for now), and the only container-specific issue we ever encounter is port conflicts, which are well documented and easy to resolve by changing the environment variables that control them.
The only real caveat right now is that we have a bunch of microservices, so their supporting services (redis, mariadb, etc.) end up running multiple times, and there is some performance loss from that. But they’re all designed to be independent, only talking to each other via their APIs, so the approach works.
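For anyone curious, here’s roughly what that pattern looks like per service (a sketch only; the service names and ports are invented): each microservice’s compose file brings its own redis, and the host-side ports are parameterized so two stacks on one laptop don’t collide.

```yaml
# docker-compose.yml for one (hypothetical) microservice's dev environment
version: "3.8"
services:
  orders-api:                               # invented service name
    build: .
    ports:
      - "${ORDERS_API_PORT:-3001}:3000"     # bump ORDERS_API_PORT if 3001 is already taken
    environment:
      REDIS_URL: redis://orders-redis:6379  # talks to its own redis over the compose network
    depends_on:
      - orders-redis
  orders-redis:                             # every service carries its own copy, hence the duplication
    image: redis:7-alpine
    ports:
      - "${ORDERS_REDIS_PORT:-6380}:6379"
```

Each additional service follows the same shape, which is where the duplicated supporting services (and the modest overhead) come from.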