
A Proposal for Package Versioning in Go | Hacker News

Original source (news.ycombinator.com)
Tags: golang go dependency-management
Clipped on: 2018-05-05
How many iterations of this do we have to go through? Go hasn't had mature dependency management since its inception and this constant thrash is really starting to make things difficult.

For all practical purposes dependency management for programming languages is a solved problem, with many open source examples. The only explanation for this constant re-invention that I can come up with (and that's shared among others I know) is that Google doesn't care/need a dependency tool because of their mono repo. Which is rather frustrating since we get features (cough aliases cough) forced down our throats when Google suddenly finds a need for them.


Actually I think the process shows a strength of the Go community. The initial lack of a dependency management tool certainly had to do with the practices at Google - as Google employees, the processes of Google naturally shaped the needs of the Go creators.

But I am very happy that they did not pick a random management system just because they "had to have one". We can see enough bad examples in the wild. They left this aspect out of the language standard to tackle the challenge later, with proper consideration.

As far as I understand the details, Russ Cox has presented a very thoroughly worked-out spec, and it seems to solve a lot of problems that other dependency management tools have. It is also completely compatible with Go so far, and fully optional.


>> But I am very happy, that they did not pick a random management system, just because they "had to have one"

But they have, multiple times? First it was just giving people links to godep when people complained in github issues. Then it was a tacit "we like this" about gps and glide. Then it was a pretty-much-official blessing of dep. Now it's a super-official-now-we-mean-it implementation of vgo. I recognize this as a pattern because I do it all the time: software engineers constantly rewrite things from the ground up when they don't care about actually solving the problem and just want to play around with cool new ideas.

As far as the vgo spec, I'm not against it per se... I like most of the general conclusions, but none of it is new. We've had these ideas in the Go world for a while. Most of them have even been implemented! Why can't we just slowly transition an existing tool? Or maybe implement a common library that individual tools can use to solve the problem in their own way? Oh wait we already did that and it got abandoned. Until we see a tool with a stable API that solves the 80% use case and lasts for over a year I'm not going to get excited about the new flashy thing.


> But they have, multiple times? First it was just giving people links to godep when people complained in github issues. Then it was a tacit "we like this" about gps and glide. Then it was a pretty-much-official blessing of dep. Now it's a super-official-now-we-mean-it implementation of vgo. I recognize this as a pattern because I do it all the time: software engineers constantly rewrite things from the ground up when they don't care about actually solving the problem and just want to play around with cool new ideas.

No. Golang hasn't had an official/pretty-much-official dependency management system. The community recommends some tools over others, but that's about it.

With dep it is a bit different - the creators were Google engineers and they wanted it to become the official tool, with integration as a go subcommand etc. - but that was never signed off by Pike or any other lead; it was only confirmed that it would be a possibility if the tool performs well.

With vgo it's similar to dep - a proof of concept to see _if_ it works nicely.

Regarding a library multiple tools can use: yes, that would be a nice thing - however, when designing such a tool _and_ a library at the same time you will run into bigger problems. Generally you want a library designed by someone who has already solved the given problem in at least one way. That brings enough experience to make the correct architectural decisions early on, as well as a good, usable API.

I'm happy that we don't just get half-assed solutions just to have one. I have seen enough projects go to shit because a major step in the project was reconsidered a few times before management put a lock on it because it went through 'enough' revisions.


>"With dep it is a bit different - the creators were Google engineers and they wanted it to become the official tool, with integration as a go subcommand etc. - but that was never signed off by Pike or any other lead, only confirmed that it would be a possibility if the tool performs well."

I'm frustrated. I started using Go during version 0.8 and have been using it since with no dependency mgmt beside my own forks because I hated everything I tried. Then dep comes along, I give it a few months to mature, try it, and it worked great. It solves my common use cases near perfectly.

What this blog post and proposal lack, IMHO, is a clear case for why dep isn't "good enough". All I see is handwaving:

"Early on, we talked about Dep simply becoming go dep, serving as the prototype of go command integration. However, the more I examined the details of the Bundler/Cargo/Dep approach and what they would mean for Go, especially built into the go command, a few of the details seemed less and less a good fit."

So "a few of the details seemed less and less a good fit" is a statement that is impossible to refute, because it is too scant on details.


This.

You want feedback now, on this new proposal?

Why bother?

It's patently obvious that our feedback on dep has been totally ignored.

What indication is there anyone will pay attention to any feedback now?

I can't be bothered any more.

I just want one solution to be picked, blessed, implemented and then not changed five minutes/months later.


There are many pages here that explain in detail the problems with the existing solutions. Read all of them. https://research.swtch.com/vgo

If you read the minimal version selection algorithm, it's clear that the proposed solution solves the problem in a novel way.



In my eyes, it would be a warning sign if the new proposal contained too many completely new ideas. That would mean they had not been tested in practice. That is why we needed all those projects which implemented versioning so far. But getting to an officially blessed, many-years-supported proposal - starting with a clean sheet, but based on all the experience gathered - is a good approach.


Welcome to IT as a career.

Having started in the 90s, I have been through these same conversations regarding Adobe, Oracle (oh lawd Oracle), MS, Dell, and Apple to a lesser extent.

“Vendor the industry currently relies on heavily is changing things or yanking the rug again?”

The free market works by big players yanking every one else around. Guh.


The big players aren't yanking anyone around. They take responsibility by announcing things like alpha periods, beta periods, release candidates, Long Term Support branches, etc. And enterprise developers take advantage of these, ensuring that they only code production applications against stable/LTS libraries and APIs.

It's SMB/"indie" developers who should be taking the responsibility here, I think, for relying on these officially-unstable packages in their own production systems, and especially for encouraging others to rely on such. Yes, sometimes, it's the only way to be competitive/have a https://en.wikipedia.org/wiki/Unique_selling_proposition. But that doesn't mean that it isn't their responsibility to cope with these changes. They're taking on that risk-of-change by using those unstable libraries or APIs.

Heck, this is half of the point of software Venture Capital: being able to underwrite the risk of relying on an unstable platform/ecosystem, so that you can have enough runway to recover and still finish your product if "the ground moves under you", and therefore can "safely" rely on these unstable technologies.

(And one not-oft-talked-about property of bootstrapped startups is that they have to underwrite that risk themselves. If a bootstrapped startup isn't just gambling with its founders' money, it has to behave more like an enterprise development shop, building on only stable foundations.)

Theoretically, you could move this underwriting role into the companies themselves, such that the companies themselves would monetarily cushion the blow of changes in their unstable APIs. But they'd need a pretty big interest in the involved consumers to do so. In such a world, you likely wouldn't have separate startup businesses using the BigCorps' unstable APIs; rather, you'd have a set of BigCorp incubators with large shares in most new startups. Maybe that hypothetical world would have higher https://en.wikipedia.org/wiki/Gross_world_product than our own! But I bet there'd be fewer startups, because starting a startup would be a lot more like a regular BigCorp job.


My experience with IBM products like WebSphere tells a different story. They were slow and buggy, never slow but solid.


See the post I replied to, where the comment was about a big player flailing with dep mgmt to the frustration of those who have to change course whenever it waves away the previous solution.

The evidence is right in your face, but you’d rather explain how “if we all saw it from my perspective...!”

Oh they announce change and deprecation, most times, but ultimately the decision was made in seclusion and foisted upon the public at large

Feels a bit like a time and revenue taxation scheme without representation. Just thinking aloud

Denial isn’t just a river in Egypt


There's no comparing proprietary products with open source projects. The Go project is a beautiful body of work and there's no call for comparing it to proprietary crapware.


>Actually I think the process shows a strength of the Go community. The initial lack of a dependency management tool certainly had to do with the practices at Google - as Google employees, the processes of Google certainly shaped the needs of the Go creators.

Do you think that's a community process? I always have the impression from Go developments that all major decisions are "my way or the highway" affairs of 2-3 of the initial designers. Possibly the same ones writing those vague "exploratory" posts and dismissals of common proposals, that voidlogic captures well in his comment below.


It's far from a solved problem.

Many ecosystems haven't gotten past the "single dependency file and lock file" system where an entire project has to use a single version of each dependency. There are people splitting their project into a thousand "microservices" and making them interoperate over HTTP instead of regular function calls so that each part of their project can have its own dependencies.


Yes, it's kinda funny. I mean, OS package managers probably have the most solid experience in dependency management, and when I look at them I can see many different concepts at work. Some, like pacman, are bold and not afraid to break a system during an upgrade (as it is very unlikely that the upgrade will fail), and others put an abnormal amount of effort into ensuring an always-functional system at any time (especially source-based distributions, e.g. Paludis).

And while I have been using development snapshots for years on my system and didn't have more bugs to cope with than on other systems, I can understand that developers in corporate contexts need to be able to specify dependencies with versions. So I appreciate the proposal very much and hope it will lead to a better solution for everybody involved, by providing a way to use versioned dependencies without introducing too much overhead for those who don't need them.


Pacman, apt-get, et al. also have something most language-specific package managers don't: maintainers. Maintainers backport security fixes and are careful not to break binary compatibility. A random $lang-specific package is unlikely to have either; if you expect a bug fix, you have to upgrade to the latest version. Developers that don't want to provide this sort of stability typically hate distro package managers.

No package manager can fix a cultural problem.


If you rely on dependencies that are not maintained you are already in trouble. Being the author/maintainer of a package comes with a responsibility. NPM already showed us what happens if you just let people do their thing.


With package managers I worry more about how I distribute my dependencies than anything else. Golang's vendor/ directory concept was a revelation for me - just ship your dependencies alongside your code.
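For anyone who hasn't used it: the concept is just a directory convention. The go tool resolves imports from a vendor/ tree before looking in GOPATH, so a pinned copy of each dependency ships inside the repository. A sketch of a hypothetical project layout:

```
myservice/
├── main.go                     # imports "github.com/pkg/errors"
└── vendor/
    └── github.com/
        └── pkg/
            └── errors/         # pinned copy, committed alongside the code
                └── errors.go
```

Checking out the repo is then sufficient to build it; no network access or external registry is needed at build time.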

Meanwhile, I've heard people spend ages complaining that this is totally wrong... while overseeing the setup of local Artifactory instances and wrangling proxying to do the exact same damn thing.


>Many ecosystems haven't gotten past the "single dependency file and lock file" system where an entire project has to use a single version of each dependency.

Perhaps they shouldn't go past that.


also so that a crash doesn’t stop everything like a BSOD, also so that you can use the language most appropriate for the task at hand because assuming a single language is good for everything is just wrong, also to be able to incrementally migrate things to new PAAS/build systems/etc in the future so you don’t do big bang... and plenty more

Multiple library versions are one of the lowest priorities when doing microservices, I'd say.

This is not to say that people don't significantly overuse microservices, but there are plenty of hard problems they solve.


No kidding. I'm so impressed with most of Go, but this is just silly. I feel like Rust started out with solid dependency management in place; at least it's been there from the early days. I work a bit with Node and Ruby too. Both have solid dependency management in place. I mean there are warts, but generally the problem is solved. Actually, now that I think about it, Yehuda Katz has had a hand in working out dependency management for all of those technologies: Rust, Ruby and Node. Maybe the Go guys should bring in Yehuda to help out. If nothing else, he could probably help them get out of the analysis-paralysis loop they seem to be in here.


Rust is good but not great. Being able to import two major versions of a lib into the same compilation unit (am I understanding this correctly?) is a huge win. Rust can have transitive deps that only differ by version number, but they can't come in contact with each other (don't cross the versions).

Reread the post, the authors make some great, mature realizations that I hope other languages will listen to. I don't use the language, but I use languages that Go has affected and this is a great thing.


> Rust is good but not great. Being able to import two major versions of a lib into the same compilation unit (am I understanding this correctly?) is a huge win.

The one time I needed that I found a pretty good workaround: you make two crates, `helper_old` and `helper_new`, and have each depend on a different version of the crate in question. Then each one just pub re-exports the crate it depends on.

That way your other crate can now depend on two different versions of that dependency.
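In Cargo terms, the wrapper-crate trick looks roughly like this (crate names and versions are hypothetical; this is a sketch of the workaround described above, not the only way to do it):

```
# helper_old/Cargo.toml - wrapper pinning the old major version
[package]
name = "helper_old"
version = "0.1.0"

[dependencies]
helper = "0.9"

# helper_old/src/lib.rs contains a single line:
#     pub use helper::*;

# helper_new/Cargo.toml does the same with helper = "1.0".
# The top-level crate then depends on both helper_old and helper_new,
# giving it access to both major versions under distinct names.
```

Because the two wrappers are distinct crates, Cargo is happy to build both versions of `helper` into the dependency graph; the renaming is what lets them coexist in one namespace.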


This is also the standard approach in the java world, where it's known as "shading."


"Shading" obviously comes from the plugin of the same name. But I think of shading as referring more to the practice of repacking classes into a single jar than to the renaming of package namespaces. As far as a standard approach to supporting multiple library versions in one application goes, I think OSGi is probably the closest thing to a workable solution.


Genius! I trust you will be authoring a new book "RustOps in Practice".


seems like that’d be a decent compiler trick


> import two major versions of a lib into the same compilation unit

When does this situation come up? What is a compilation unit in this case?


It comes up with apis that deal with data storage or IPC concerns.

You might need a handler for both versions if you have customers that use the old api and ones that use the new one. Or different departments of the same customer.

Or if you dodge that bullet but have to write a custom migration tool to get data from the old version and cook it to go into the new one. Half of the code is read only and the other write mostly but they still have to coexist if you do anything more complex than a backup/restore script.


If the newer version of some library has some breaking changes, you can migrate bit by bit instead of doing it all in one go.


If this is the only motivation for multiple dependency versions in the same compilation unit (crate), I'm not convinced. You would be trading off the simplicity of each crate specifying a range of acceptable dependency versions for specifying `N` ranges of acceptable dependencies, and requiring one of each. Much better to enforce one version per compilation unit, and if you want to get complex with your dependencies, then break your project into multiple compilation units.


That needs language support too; some languages simply do not support it. I believe this includes Go, due to how linking works. This could be fixed, but it would be hard - maybe a 2.0 thing - and it does make package management harder. And in a monorepo it doesn't happen, so Google doesn't care.


> I feel like Rust started out with solid dependency management in place, at least it's been there from the early days.

Rust had actually been through three failed package manager projects before they decided to bring in the domain experts at Tilde. Nice that at least that bit of churn has been forgotten - but yeah, it became obvious over that time just how non-trivial the problem is. Cargo makes it seem easy though ;)


> How many iterations of this do we have to go through?

Many, I think, based on what the author of dep says (https://sdboyer.io/blog/vgo-and-dep/). I don't get why rsc would just throw away the code and work that was the culmination of years of community work. Isn't rsc largely responsible for creating this problem in the first place? I'm doubtful he's going to fix the dependency problem from scratch. There's a certain pigheadedness to a lot of golang decisions, and I'm afraid this is another one of them...


I definitely see this with Go decisions too, but I think that same "pigheadedness" has also been a driving factor behind a lot of what makes Go so effective to work with. They've thrown out an awful lot of stuff, which sometimes makes me bristle but I usually have to admit they were right.

I think they threw out a bit too much, but that's only a problem if the gaps don't get filled or the compromises aren't adequately explained. What's left is a very pragmatic selection of extremely useful tools that can be used to cut a tonne of working code very, very quickly. I can't say I always like it, but I benefit tremendously from it.


I really like Go overall, especially for systems work. But there's various quirks that hurt my day to day productivity, and I think they only exist due to stubbornness.

For example, why do unused variables and imports, a style issue, cause a compiler error? This is a constant pain when commenting out lines of code during debugging or prototyping. On the other hand, the compiler doesn't care if I ignore or forget to check error values - which is almost always incorrect behavior.
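For readers who haven't hit this: the standard escape hatches are the blank identifier for variables and a blank import for packages. A minimal sketch (the variable and import are arbitrary examples):

```go
package main

import (
	"fmt"
	_ "os" // blank import: keeps an otherwise-unused import compiling
)

func demo() string {
	x := 42 // without the next line, "x declared but not used" is a compile error
	_ = x   // assigning to the blank identifier silences it
	return "builds cleanly"
}

func main() {
	fmt.Println(demo())
}
```

Remove the `_ = x` line and the build fails outright, which is exactly the friction described above when temporarily commenting out the code that used `x`.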


Yeah for sure, it's definitely got its warts. I suspect the import thing was far easier to code as an error than to compute whether they need to be pruned. Gotta love those go build times though! With vim-go and goimports, it's handled automatically for me on save so I don't find that an issue any more. Unused variables, on the other hand...


> For example, why do unused variables and imports, a style issue, cause a compiler error?

Here's why - https://golang.org/doc/faq#unused_variables_and_imports


Sometimes thrashing is good. Consider the pile of crap that is NuGet in the .NET world. Sure there's Paket, but it has nowhere near the adoption/support of the Microsoft blessed NuGet package manager.

At least the Golang team can cherry pick the best ideas & lessons-learned from these community-driven package managers.


I've been hearing this argument for about 4 years now. Pointing out that .NET has terrible dependency management does nothing for me when Google's main languages (Python, Java, C++) all have solved dependency management for their own use cases. If the Golang team has had enough time to cherry pick the best ideas and lessons learned to build a language they've had enough time to do so for dependency management.

At the very least they could have continued their hardcore hands-off approach of allowing the community to solve it. But instead they half-assed it and started capriciously anointing chosen solutions, and honestly we're probably in a worse spot than we were before dep came along.

Plus, let's be honest here, there's no excuse for constantly changing APIs and binaries. If this were truly about getting the best ideas and lessons learned, we'd see refactors of existing tools with slow migrations to new concepts; instead we've seen multiple complete rewrites, despite multiple efforts to build common libraries that should prevent such events!


>when Google's main languages (Python, Java, C++) all have solved dependency management for their own use cases.

Huh? Of those three, I would only consider Java "solved", and both Maven & Ant are far from the panacea of package management.

virtualenv is a community project (much like dep), is pretty new in the grand scheme of things, and isn't completely standardized (some projects use tox, some list out a requirements.txt (rarely version-pinned), and others vendor).

As for C++, almost everyone does something a bit different so I don't see how that is at all relatable.


> virtualenv is a community project (much like dep)

virtualenv was a community project but based on that venv was created and has been part of the official Python distribution since Python 3.3, released in 2012.

https://docs.python.org/3/library/venv.html


>virtualenv is a community project (much like dep)

It doesn't stop there, either; I've seen Python projects using nothing, virtualenv, venv, pip-tools, and pipenv.

I finally feel like pipenv is the true solution, and it seems very new. All that said, Python is quite old relative to Go.


Python is slowly catching up. For example in JS/Elm/Rust you have a clean approach with:

* a central package registry

* a file to describe dependencies

* a command line tool to install, build and publish packages

In Python things are more fragmented. You have:

* a central package registry

* a file to store some of your dependencies (Pipfile)

* a command line tool to install packages (Pipenv)

* a file called setup.py in which you need to specify your dependencies again (in another format). You can execute this file to build/publish packages.

I think it would allow for a much nicer user experience if Pipenv/Pipfile handled packaging/building/publishing as well.
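The duplication described above looks roughly like this (package names and versions are made up for illustration; the same dependency is stated twice, once per format):

```
# Pipfile - consumed by Pipenv when installing for development
[packages]
requests = ">=2.18"

# setup.py - read again, in a different format, when building/publishing
from setuptools import setup
setup(
    name="myproject",
    install_requires=["requests>=2.18"],  # the same dependency, restated
)
```

Keeping the two in sync by hand is the fragmentation being complained about: nothing in the toolchain enforces that Pipfile and install_requires agree.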


With all the bad press npm has had, I would not put it in the clean approach bag.


For java, I would look at gradle, not maven.

Of course, gradle hooks into the whole Maven ecosystem, but that is one of its advantages. Everybody in Java land understands Maven packages; how you generate them doesn't matter.


Gradle degraded the dependency experience significantly from maven. Maven had a deterministic, sensible (and selectable) way of dealing with conflicts - gradle just defaults to "largest version wins". Similarly, gradle chucked the whole idea of being able to manage common sets of dependencies across multiple projects easily (the maven super pom). I mean, it's all programmable in gradle, but it pretty much ignored the excellent work maven had done except to use its repos.


Android builds have renewed my love for Maven.

If it wasn't for Android, I wouldn't bother 1 second to learn Gradle.

But I guess Groovy needs some project to keep it alive, now that no one remembers the days when JSF beans would be written in Groovy or JUGs were holding Grails talks month after month.


Don’t worry, Gradle is moving to kotlin for build scripts already. Groovy will soon be a (failed) thing of the past.


Seems like every other site is converting its Gradle builds into Bazel.


Python has several widely-used dependency management systems.

C++ has no widely-used dependency management systems.

Both languages are doing fine.


Because of the lack of dependency management C++ suffers from "mega dependencies" that include the entire kitchen sink and more.


Which C++ projects have "mega dependencies"? There is no technical reason to have those (or any other kind of dependency management problems, for that matter). I'm not sure what it's like on Windows, but most other OSes support all of the major C++ dependencies in their default package managers (or things like MacPorts on Mac OS) and it's not difficult to install anything else manually.


I think he/she means something like boost or Qt.

On Windows, Microsoft is investing in vcpkg as a package manager.

https://github.com/Microsoft/vcpkg


Interestingly, Debian/Ubuntu has standalone packages for most of the boost libs that need to be compiled.


You don't think Python also has thrash? I'm a huge fan of pyenv as it brings in manifest+lockfile+segregated install paths. But those three things are relatively new to Python and very new to get unified.


environment namespacing != dependency management

Regardless of your setup at the end of the day you're using pip to solve your dependency graph, and have been for over a decade.


I mistyped. I meant pipenv (https://docs.pipenv.org/) not pyenv. Pipenv does use pip but is a significant, recent upgrade to package management and is now the recommended tool.


There are only 24 hours in a day, and the Go team has limited resources. They focused on other issues in the last few years. Dependency management is the current focus now.


> If the Golang team has had enough time to cherry pick the best ideas and lessons learned to build a language

Except where have they done that? They've had 40 years and their solution to the C binary function error code return problem was... to add a special way to return an error code. There have been superior solutions to this for 20 years at least.

How long have we known about the value of generics?

To me, the fact that they can't figure out how to solve this isn't remotely suprising.


> If the Golang team has had enough time to cherry pick the best ideas and lessons learned to build a language they've had enough time to do so for dependency management.

Most of Go's features are those that proved their value over 40 years. There are no dependency management schemes that have that kind of reputation. Good ideas take time and misfeatures are expensive.


CSP, the central premise of golang's concurrency model, doesn't meet that bar.

If they are willing to throw out that requirement for something as fundamental, why should dependency management be held to so high a standard?


Not sure what you're referring to. CSP turns 40 this year, and it's not the central premise of Go's concurrency. Whatever the original intent, very little Go is written in true CSP fashion. Of course, even if I were wrong about these points (and I'm not), it wouldn't invalidate my original point as you suggest: just because most of Go's features are very old doesn't mean Go is forbidden from making an innovative move or two--for example, its scheduler is unprecedented (at least as far as I'm aware).


Something being formulated exactly 40 years ago in academia doesn't count as proving its value over 40 years, especially given that broad support for the concept is new, arriving via golang and, at best, the marginally popular Clojure. The golang FAQ specifically says that CSP is the basis of its concurrency.

Combine those 2 facts & I think it's fair to say golang is willing to base things (central ones, even) on untried ideas, which directly contradicts your stated position, which was precisely that no dependency management scheme lives up to the precedent of the other accepted features of the language.

This is trivially proven incorrect via looking at other language dependency management schemes that have much more broadly proven their worth.


> doesn’t count as proving its value over 40 years

The Go designers used CSP over the course of many decades via Newsqueak, Alef, Limbo and Plan 9 C's libthread.


Speaking as only a Go user & not a contributor, that matches the paradigm I've observed for how golang chooses the features to add.

Did the principals enjoy a feature while using it on Plan 9? Likely to be included. A feature having broad industry or academic backing, or a long history? Immaterial.


Newsqueak, Alef, Limbo and Plan 9 C failed spectacularly in the market; nothing to brag about as proven technologies.

In fact, as nice as Go might be, if it had the AT&T Research Labs stamp instead of Google's, it would have been just as successful.


> Combine those 2 facts & I think it’s fair to say golang is willing to base things (central ones even) on untried ideas, which directly contradicts your stated position

It absolutely doesn't contradict my stated position. I was clear about that in my previous post.

CSP inspired Go's features (in that sense alone it is "the basis for Go's concurrency", as alluded to by the Go FAQ), but as I mentioned, it's not integral to Go in any way, and very few programs are modeled in CSP fashion.


What are some of your specific complaints about NuGet? I've never disliked it that much. For the most part it has "just worked" for me.


In large projects, NuGet takes FOREVER to resolve package dependencies - I mean 4+ minutes on a new i7 Thinkpad w/16gb RAM/1TB ssd.

Heaven-forbid you want to upgrade a package that exists in all 130 projects in the solution (not my call to have that many projects) - you may as well take a long lunch. I will try to make that the last task of the day so I can let it run for as long as it needs to.

The VS UI for NuGet is terribly buggy.


Wow! 4+ minutes.

Try to build a C++ or Android project and then go for lunch instead.

NuGet is great.


That's 4 minutes to resolve one NuGet package's dependencies - not 4 minutes for a build.


Really? I never had it take that long.


Usually it doesn't take that long - but it is still generally, unacceptably slow IMO. Last week I was working on fixing duplicate package references (version mismatches) in our csproj files in the branch/solution, and I couldn't believe it took 4 minutes to resolve a single reference (after fixing the issue in this particular project!)


I see, I guess I have been lucky then.


NuGet / VS has no qualms with having two references to the same package, but different versions, in the project file (e.g. Newtonsoft.Json.dll 9.0 and 10.0). The build will likely fail, though, and you'll get no visual feedback that there are two refs in the VS NuGet UI. How did we get into that state to begin with? I suspect through bad project-file merges (or possibly NuGet UI bugs, can't say for sure).


> How did we get into that state to begin with? I suspect through bad project-file merges (or possibly NuGet UI bugs, can't say for sure).

Most likely someone modified the packages for an individual project and not the solution as a whole. Always manage packages at the solution level and you're much less likely to have these issues - unless you have multiple solutions...


That sounds great when you have a small team, with 30+ devs it's hard to police.


- no local dependencies^
- 'unable to resolve x, arbitrarily picked 1.4.3'
- no lock file
- nuspec file configuration hell
- NuGet v2 and v3 feeds arbitrarily going down
- installing a specific NuGet version on anything other than Windows

Basically, it 'just works' for simple scenarios; it's just not very good for anything else.

If you want a comprehensive guide to why it's not awesome, look at the 'Paket' website; they cover the issues quite clearly.

^ you can actually use local dependencies, but it's irritating and poorly supported (still uses the global system cache, forcing a cache flush to update).


When you install a package, nuget will install old dependencies of that package.


> Go hasn't had mature dependency management since its inception and this constant thrash is really starting to make things difficult.

What constant thrash? In the 6 years since Go 1.0 the only dependency management related change I'm aware of is the addition of `vendor/`.

Update: sounds like "thrash" refers to the variety of community vendoring tools. I guess of the few I've used they all felt fairly similar/interchangeable, so it didn't feel like thrashing changes.


The constant thrash caused by the lack of a standard package management tool—even a de facto standard. Just look at the various package managers that have come and gone over the years:

1. https://github.com/tools/godep

2. https://github.com/kardianos/govendor

3. https://github.com/robfig/glock

4. https://github.com/rogpeppe/govers

5. https://github.com/Masterminds/glide

6. https://github.com/golang/dep

I'm sure I'm forgetting a few. And just when it seemed the Go team might be ready to standardize around Dep (6), they threw this wrench in the works.

I'm of two minds about vgo. I think it has some interesting ideas, but Dep works today, is widely adopted, and has nearly all the features you'd expect from a modern package manager.


> The constant thrash caused by the lack of a standard package management tool

There is a standard, and has been one from the early days. The catch is that the standard is not loved by all, so others have created competing package management systems to fit their own needs.


What is it?


go get.


Does that finally work with private repos?


It always has. You have to edit your gitconfig. On mobile, so I can't paste mine. A Google search should get you there. It did for me like 5 years ago.
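For reference, the gitconfig trick usually suggested is to rewrite HTTPS URLs to SSH so that `go get` authenticates with your SSH key instead of prompting for credentials. A sketch of the relevant fragment (host and scope are assumptions; adjust for your setup):

```
# ~/.gitconfig — rewrite HTTPS GitHub URLs to SSH so `go get`
# can fetch private repos using your SSH key
[url "git@github.com:"]
    insteadOf = https://github.com/
```

The same `insteadOf` rewrite works for self-hosted Git servers; as noted downthread, applying it globally affects every repo you touch, so some people scope it per-repository instead.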


So I have to change my git config, either globally, impacting everything I do, or manually for every project I use go get in...

Got it.


Between "go get", Godep, Govendor, Glide, gb, Dep, and this — and I'm actually omitting a bunch of other, less popular utilities [1] — there's certainly been churn.

There's definitely a feeling among Go developers that this problem should have been solved much earlier, and that the Go team's years-long refusal to address it caused the proliferation of mediocre tools (looking at you, Glide) that in many ways made the whole situation worse.

When Dep came along, a lot of people breathed a sigh of relief, because we finally had something okay, and we could go back to being productive instead of fighting dependency management problems all day.

[1] https://github.com/golang/go/wiki/PackageManagementTools


> In the 6 years since Go 1.0 the only dependency management related change I'm aware of is the addition of `vendor/`.

This. Sure, new tools have appeared, but there's literally nothing wrong if you decide to just stick with godep or whatever you happen to be using.


There have been many packages for dependency management, which is not "official" thrash, but thrash nonetheless. Granted, mostly it's been thrash over the last few years. Especially bad since `go dep` was hailed as the king, and then recently abandoned entirely.


The blog post mentions some of this. It claims: "For a long time, we believed that the problem of package versioning would be best solved by an add-on tool, and we encouraged people to create one. The Go community created many tools with different approaches."

These many tools were the thrash, including godep, gb, glide, dep, and more.

Each of these expressed versions with different manifests; many of them could import other formats', but not all. All in all it has been a mess.

Sure, the go team has not officially done much, but that's because their stance has been borderline negligent on the topic.


I agree that the thrashing is a nuisance, but the 'reinvention' has nothing to do with Google except that Go didn't ship with a package manager by default because it wasn't useful to Google for the monorepo reason you cited. Regarding "features forced down our throats", I think that has only happened with aliases, and the community pushed back to the effect that the feature addition was halted until the community had time to properly review it. Mostly I think that was a one-off problem.

As for "it's a solved problem", is it? Until perhaps the last year, pip, npm, and most other package managers have been plagued with serious issues. I'm not fond of the situation in Go, but I think it's not unreasonable to try something different and less binding than a package manager until a solution emerges which solves the problems faced by other package managers.


+1

I actually also liked dep with the vendoring approach; it is just better to CI a repo with vendored dependencies (no more broken builds because of internet/proxy or GitHub problems). And in Go it was/is even simple to upgrade them. Sadly Go is slowly moving away from that part.

The new approach is bad, because they still pull sources from GitHub, which makes their approach unsuitable.

E: damn iPhone autocorrect


> Dep was released in January 2017.

I know, Go is not a very old language, but all the Go projects I was involved in were started before that, and I haven't written any Go in like 5 months. But still - this is too new for all the Go developers to have looked at it in depth (I certainly didn't get around to trying it), and yet there's something new here.

We're all joking about a new JavaScript framework being released every week, but dependency management in Go feels a little like that.

I'm not a huge fan of Go anymore after my last experience (writing some web stuff, with CRUD) after having been a huge fan (writing monitoring checks and non-HTTP daemons), and I have no immediate plans - so I've no stake here, but I can really only hope this annoying part (which least-bad dependency management system will I use?) is finally over.


We've been using Dep for about 6 months. It's definitely production ready. The interface may change a bit in the future, and of course now Dep may be scrapped altogether, but right now it's very usable. We haven't encountered any bugs.


Agreed. Dep had its warts (20 minutes for a very simple dep ensure...), but it solved the 80% use case in a way that was least objectionable for the majority. The fact that they can't (won't?) even refactor dep to suit their new ideas or try and revitalize the gps project just tells me that they don't really care about solving this for end users and just want to have fun writing a fancy dependency graph solver. Which is cool and all, but at least give me a stable API first.


> just want to have fun writing a fancy dependency graph solver

There is no graph solver, that's the whole point.


Other package managers, including the ones mentioned in the OP, don't have SAT solvers either. This isn't a new feature of dependency resolution algorithms.


The proposal explicitly allows for a "proxy" so you don't have to pull from GitHub et al:

> Define a URL schema for fetching Go modules from proxies, used both for installing modules using custom domain names and also when the $GOPROXY environment variable is set. The latter allows companies and individuals to send all module download requests through a proxy for security, availability, or other reasons.
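To illustrate, routing downloads through such a proxy would come down to a single environment variable (the proxy URL here is a made-up example):

```
# Route all module downloads through an internal mirror
# (hypothetical URL); fetches then bypass GitHub et al.
export GOPROXY=https://goproxy.internal.example.com
```

Behind that URL, the proxy just serves module metadata and .zip archives over the schema the proposal defines, so a static file server can act as one.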


The major problem I have with the proxy is that now I have yet another system to keep track of, update, secure, backup and maintain.

At least with vendoring I keep all of my code pinned in one portable repo and all I really need is dep and git. If I clone or backup the repo I don't have to think about cloning the state of the proxy also to get my code to compile.

I don't see that any serious business would be very enthusiastic about adding a code proxy as yet another cog in their development pipeline. One of the main appeals of Go is that it reduces many pain points of a development environment. As a counterexample, see e.g. the insane requirements of the JavaScript web development ecosystem.

I also have some reservations about getting developers to adhere to a versioning repo layout standard - that /path/to/v2/ proposal and semantic versioning - absent any automated tools to enforce it (see cats, herding of). How many of the many Go github repos follow the recommended cmd/pkg code layout [0]? Not many. My cynical notion is that anything that's not automated out of the box is going to quickly run into the inevitable proclivity of humans to be messy.

Having said all this, I do think the discussion is worthwhile. My hope is that rather than completely switching dependency management systems, the discussion identifies the things which are still painful with dep and fixes them. Let's face it, the vast majority of projects out there really don't need anything more complicated than dep, so I would hate to see it abandoned.

[0] https://github.com/golang-standards/project-layout


You don't have to "pull sources from GitHub". You can host your code as a .zip file on any static web server. Look under "Download Protocol" in this article: https://research.swtch.com/vgo-module

About vendoring, I think Russ Cox is aware of the issue. Look at this discussion on his website: https://research.swtch.com/vgo-module#comment-3771676619


I haven't read the spec, only the blogpost from last month, but afair nothing in vgo prohibits the use of a cache in front of GitHub that makes CI independent of GitHub.com availability (unless, of course, you're bumping your deps).


You're right, nothing in the spec prohibits "vendoring" the .zip modules in a directory in your project. It's purely an implementation issue. vgo already provides GOPROXY, but a built-in "vendoring" mechanism would be easier for most people. Here is one I commented on Russ' blog: https://research.swtch.com/vgo-module#comment-3772245118


I thought they were adding support for vendoring? Was it abandoned again?


I think GP is referring to the fact that dep may not remain the blessed solution for long; recently, Russ Cox posted a number of articles suggesting looking into alternatives https://research.swtch.com/vgo


> For all practical purposes dependency management for programming languages is a solved problem

Hey Phil! It's been a long time. C'mon man, when we worked together, are you telling me you never ran into the never-resolving dependency map with berkshelf and chef? I know I did many times. Locks and a manifest did not solve that. It sounds like the proposed solution should prevent that.


> For all practical purposes dependency management for programming languages is a solved problem

It's obviously not solved satisfactorily.


Yes, it is. Everything can be improved, but I see very few complaints with Cargo, and, most importantly, no complaints that would be solved with minimal version selection. (Most complaints I see are about the lack of package namespacing.)


One of the ways I'm hopeful that cargo can improve in this area is through the work being done on rust-semverver. If crates.io can either determine semver compliance or, perhaps, deny releases that are not properly semantically versioned, it would give cargo more license to select a single version to satisfy diamond dependency graphs. Those diamond dependency graphs are one of the ways that package management is famously very much unsolved.


I agree that Cargo is awesome and I would be happy to see Go follow suit, but I interpret "package management is solved" to mean "the industry has standardized on a single scheme". There does seem to be convergence in this space, but it's a relatively new phenomenon.


Yes, the industry has absolutely standardized on a single scheme: manifest plus lock as separate declarations, SAT solver, single dependency version per compilation unit.


That may be what things are converging on but in my opinion, any system that imposes a "single dependency version per compilation unit" policy is not a "real" dependency management solution. That approach creates a tremendous amount of friction because it pretty much ensures dependency hell if there are transitive dependencies with separate release cycles (and there very much should be able to be).

Edit: I just realized that we may have a different definition of "compilation unit". If you mean "single program" then my point stands. However, if you just meant some sectioned-off part of the program, then I think the drawbacks of that are far less. Basically, I think that it is useful to be able to have access to multiple versions of a datatype (e.g. if you want to write a converter from old -> new) but that use case is far more fringe.


As a sysadmin I object to the notion that old to new conversions are a "fringe" use-case. Pretty much all my coding (I'm a "devops" sysadmin-type) is exclusively "I need to wrangle all versions of thing 1 into a type of thing 2 or between several versions of thing 1".

If backwards compatibility is important, then letting me use multiple versions side-by-side in the same modules easily is damn important.


To be clear, Cargo fits the latter definition; you can't depend directly on two different versions of a package, but you can transitively.


If that's the case, it's a pretty recent development. Python and JS didn't have this until very recently. Yeah, Go is a bit late to the party, but hardly worth the dramatic criticism in this thread.


No, it's hardly a standard. It's just a common approach. The article presents another approach which differs on the three points you mention: manifest plus lock as separate declarations, SAT solver, single dependency version per compilation unit. And I think what Russ proposes is a really interesting tradeoff in the design space.


It is the _de facto_ standard and has been for many years. It is what's expected by the vast majority of software developers.


> I see very few complaints

If you live in a filter bubble (survivorship bias).


Only because the golang core team seem to insist on reinventing every wheel either without considering prior art, or at least without considering why prior art went with a particular approach.

I swear I should put together a list of these instances.


> Only because the golang core team seem to insist on reinventing every wheel either without considering prior art, or at least without considering why prior art went with a particular approach.

The popular criticism is that Go isn't reinventing enough; that it's stuck in the 70's and it's not sufficiently innovative.


I think it's two sides of the same coin.

I've heard go described as a language that doesn't try to solve any hard problems. Let's set aside for a moment whether or not that's a positive or a negative. The bigger problem is that they also seem to completely ignore everyone else's existing solutions to hard problems too, which leads to the criticism that it's stuck in the 70's.

Sure, a lot of the existing solutions to some of these problems aren't perfect or involve tradeoffs that don't work for everyone. In general though, more recent solutions have built upon the lessons of previous approaches and now we've got languages and tooling that get a hell of a lot right, straight out of the box (I'm looking at you Rust / Cargo).

But go isn't doing this. They appear to be trying to come up with de novo solutions to problems from the perspective of the 70's. Whereas everyone else is "standing on the shoulders of giants", they insist on discarding the lessons we've learned over the past fifty years and running one by one into all of the same problems that led us to the solutions we have today. As an observer from the sidelines it's painful to watch them relearn these same lessons the hard way.


I largely agree, and still for everything that Rust and company get right and for everything that Go gets wrong and for how much optimism I approach Rust and company with, I'm still far more productive in Go than any other language (even the ones with which I have much more experience). The only way I can explain this is that programming language design is something of an art form, and the integration of features and tools is far more important than the set of features and tools.


Rust's current package manager is the third attempt, the one that worked. The async saga is also approaching its third attempt before we have it working. For the module system, there was a big backlash against simplifying it for new users, so the likely effect is a system understood only by the few developers who have spent significant time with it.


That's a really good perspective. Thanks for sharing.


I think that's an inaccurate/misleading framing: the typical criticism is that it has seemed to ignore innovations from elsewhere, there's no suggestion that it needs to reinvent them. For instance, people consider generics a solved problem as there's numerous schemes invented by other languages, and critics think one of them should be able to work for Go.


Yes, but Go has specificities that make it difficult to directly use the schemes of other languages: Go is AOT compiled (we can't use C# generics), Go favors compilation speed (we can't use C++ templates), Go lets the developer control memory layout with arrays and structs (we can't use OCaml generics), etc. Of course, Go should take inspiration from other languages, but adding generics is not as easy as simply transposing another language's implementation. There are multiple detailed proposals in the bug tracker that have failed for multiple reasons.


There are also CLU, Ada, Modula-2, Modula-3, Eiffel, Sather, BETA as possible inspirations for implementing generics, all the way back to 1976 (CLU).

As for C#, people keep forgetting .NET always had a JIT/AOT compiler since version 1.0.

Just because not everyone bothers to sign their applications and call NGEN at installation time, it doesn't mean it isn't there.

Also Mono always supported AOT compilation, it is the way Xamarin works on iOS and the deployment story on Windows Store since it was introduced on Windows 8.


I remember Mono with AOT compilation had some limitations regarding generics in the past, but maybe this has been solved since then.


Yes Mono has some limitations, but it isn't 100% like the iOS Xamarin compiler.

http://www.mono-project.com/docs/advanced/aot/

In any case, a limited form of generics is better than not having any at all.

They don't need to provide a turing complete implementation of generics, even a basic one like CLU had would already be an improvement.

Using go generate feels like the old days, writing generic code in C and C++ with pre-processor tricks.


I agree. I hope the Go team will tackle this issue, when they are done with dependency management.


Swift has essentially all the same constraints, and fretting about compilation speed with the C++ approach is hard to stomach given that one of the widely-used replacements is equivalent: manually copy-pasting code, instead of letting the compiler do it.

In any case, the truth of the complaint about generics isn't actually relevant here: the rebuttal is to "people complain about Go not reinventing enough", and they don't, as they (truthfully or otherwise) think it is ignoring already researched and widely-implemented solutions to the problem and complain about this (that is, the complaint is "uninventing", not "not enough reinventing").


I didn't mention Swift and I should have. I agree it's close enough to Go to provide useful inspiration.

But even Swift implementation of generics has its issues and tradeoffs. Look for example at this thread about "Compile-time generic specialization" in Swift: https://lists.swift.org/pipermail/swift-evolution/Week-of-Mo...

Repeating that Go ignores "already researched and widely-implemented solutions" is getting old, really. Would you say that OCaml doesn't have shared memory multicore parallelism because it ignores already researched and widely-implemented solutions??? Would you say that Haskell garbage collector causes long runtime pauses because it ignores already researched and widely-implemented solutions??? The truth is that solving that kind of things is a lot more complex than just copy-paste the code from another language.


I'm... intimately familiar with generics in Swift. :) Like all systems it has trade-offs, but the decision it makes line up with the decisions you said Go wanted to also make.

In any case, that specific concern doesn't apply to Go, since it doesn't have overloading.

> Repeating that Go ignores "already researched and widely-implemented solutions" is getting old, really

Sure, but that still isn't the point of this discussion: it's not what people should be doing, or the underlying truth, it's what people are doing in practice and they are complaining about the Go team seemingly ignoring the past 40 years of programming language development, not complaining that they're not inventive enough.


You're right about the issue I mentioned which is irrelevant since there is no overloading in Go. Thanks for pointing it out.

I didn't know you worked on Rust and now on Swift :-)

What strategy does Swift use to compile a generic function or method? Does it generate only one polymorphic version of the code, or multiple specialized versions for each expected parameter types (often called code specialization or monomorphization)?


One polymorphic version, like OCaml or Haskell, not specializing like C++ or Rust.

The compiler can and does specialize functions as an optimization, but that's not necessary nor part of the semantic model.


I always thought that generating one polymorphic version would be a better fit for Go than generating multiple specialized versions.

It's interesting that Swift adopted this approach, and it confirms what you said earlier about Swift having essentially the same constraints as Go and being a good source of inspiration for Go.

What's the status of Swift on Linux, to write web services and connect to databases?


There's lots of testing for Swift on Linux (e.g. every pull request runs tests on macOS and Linux), and Apple recently announced https://github.com/apple/swift-nio which seems to have been very well received, e.g. the Vapor framework is completely switching to it for Vapor 3.


Thanks for the link. Does Swift provide something similar to goroutines?


>OCaml doesn't have shared memory multicore parallelism because it ignores already researched and widely-implemented solutions?

OCaml people definitely do not ignore them; they are carefully investigating existing work and elaborating a modern and efficient solution [1] (unlike Go, which is too opinionated).

[1] https://github.com/ocamllabs/ocaml-multicore/wiki/Memory-mod...


I'm aware of the ongoing work to enable multicore parallelism in OCaml. It's the reason why I mentioned OCaml in my comment. My comment was rhetorical. My apologies if it was not clear.

That said, I don't see the link between the work done on OCaml multicore parallelism and Go "being too opinionated"...

If I follow you, Go people, unlike OCaml people, are "too opinionated", and don't "carefully investigate existing code" and don't "elaborate modern and efficient code". Do you realize how arrogant (and a bit ridiculous) this sounds?


Is Swift's scheme the same as Rust's, which is the same as Java's, which is similar to C++'s? If not, then it seems to me language developers decided to use an approach that suited their purpose.


No, they're all different, which is my point: there's a variety of schemes that all have well-understood pluses and minus, meaning new languages don't have to do the exploration themselves.


> The popular criticism is that Go isn't reinventing enough; that it's stuck in the 70's and it's not sufficiently innovative.

That's popular, but unfounded, in my opinion.

I find it ironic that some HN commenters are criticizing the Go team for being stuck in the past when here they are precisely trying to innovate and question the status quo on package management.


> Only because the golang core team seem to insist on reinventing every wheel either without considering prior art, or at least without considering why prior art went with a particular approach.

Actually it's pretty clear that they did their homework.


If you count as “their homework” the dozens of iterations of this we’ve been through.

First, it was literally argued that `go get` just grabbing `master` of all of your dependencies was good enough, because if you ever need to release something backwards-incompatible, you should create a new repo. I wish I were making this up.

Then we just vendored everything and never updated our dependencies, and that too was good enough.

Then GoPM, go dep, and a deluge of other tools that I don’t care to track down or sort into their appropriate order.

And now this, which decides to buck the trends that everyone else follows yet again and attempts to satisfy dependency trees by choosing the minimal compatible versions instead of the maximal ones, with the completely foreseeable consequences that:

  * security patches won’t be applied
  * issue trackers will be filled with bug reports that are closed with messages saying “this was fixed months ago, update”, and
  * updating dependencies will become *harder*, since this is a straightforward result of doing it less frequently
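For readers who haven't read the proposal: minimal version selection dispenses with a solver entirely. Each module states the minimum versions of its dependencies, and the build uses, for each module, the maximum of all stated minimums reachable from the root. A toy sketch of the idea (module names and integer versions are invented for illustration; real vgo uses semver and per-version requirement lists fetched from repositories):

```go
package main

import (
	"fmt"
	"sort"
)

// modver identifies a module at a specific version.
// Versions are simplified to integers instead of semver strings.
type modver struct {
	mod     string
	version int
}

// reqs maps each module@version to the minimum versions of its
// dependencies (hypothetical data for illustration).
var reqs = map[modver][]modver{
	{"root", 1}: {{"A", 1}, {"B", 1}},
	{"A", 1}:    {{"C", 2}},
	{"B", 1}:    {{"C", 1}},
	{"C", 1}:    {},
	{"C", 2}:    {},
}

// buildList walks the requirement graph from root and keeps, for each
// module, the highest minimum version demanded by any requirer.
func buildList(root modver) map[string]int {
	chosen := map[string]int{root.mod: root.version}
	queue := []modver{root}
	for len(queue) > 0 {
		m := queue[0]
		queue = queue[1:]
		for _, dep := range reqs[m] {
			if dep.version > chosen[dep.mod] {
				chosen[dep.mod] = dep.version
				queue = append(queue, dep)
			}
		}
	}
	return chosen
}

func main() {
	list := buildList(modver{"root", 1})
	mods := make([]string, 0, len(list))
	for m := range list {
		mods = append(mods, m)
	}
	sort.Strings(mods)
	for _, m := range mods {
		fmt.Printf("%s v%d\n", m, list[m])
	}
}
```

Here C resolves to v2, the highest minimum anyone asked for; nothing is ever upgraded beyond what some module explicitly requested, which is exactly the behavior the security-patch concern above is about.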


If you can't `go get -u all` on your project, then at the least either you or your upstreams have done something 'wrong'.

First, the upstream behavior; not supporting the published interface (as it was and should therefore work forever) means they need to deprecate that interface and release another with the incompatible changes. However the PROPER way of handling this is to add new versions of exposed public methods and interfaces / constants with new names and only if such a major change is required that you can't support the old interfaces, at THAT point, release at least under a new inclusion path if not new dependency name.

Things a local developer might have done wrong: A) include an 'immature' upstream repository that behaves like the above. B) use non-public components or other unsafe practices to go beyond the interface contract that was exposed.


I guess using any dependency, directly or transitively, that declares itself pre-1.0 in the go or semver meaning of the word, is thus right out.

At that point, almost all of the go ecosystem can't be used.


Do you have proof that the Go team has not looked at any other dependency management approaches or considered ones from other languages?


Obviously I don’t, but the available evidence — to me, anyway — points to this being a cultural reality in the golang ecosystem.

I left the language years ago, but the steady stream of articles of the form “Why go doesn’t need $x” followed by innumerable comments, articles, and blog posts by people struggling with the lack of $x, followed by the go core team (due to the completely foreseeable consequences of not having $x) begrudgingly adopting $y, which is supposed to (but doesn’t actually) obviate the need for $x… I’ve reached my own conclusions, and I’m finding more and more people who are leaving the ecosystem with those same conclusions.

I believe that Go excellently solves problems that Google has. I don’t believe it solves many problems that most other users have, even though it might seem like it at the outset.


Wait until they finally cave in to generics being one of the top requests on Go surveys and bring something to Go following that same design process.


What are the "benefits" of this approach?

To re-iterate a different post of mine against this question (implicitly, what are the benefits of NOT having library versions):

    * The only version to test against is HEAD
    * No fossilization of security or stability issues
    * Public libraries must support all Uppercased (exposed) declarations; that is the library interface.
    * If the build breaks in an odd way: update go, then: go get -u all
    * Simplicity, there's only one supported version and only one version to go get and develop against.
    * Also, why would building against an old version _ever_ be necessary?


Have you read the post? It provides answers to your claim that "dependency management is a solved problem".


> The only explanation for this constant re-invention that I can come up with (and that's shared among others I know) is that Google doesn't care/need a dependency tool because of their mono repo.

That's a pretty good argument for using a monorepo, don't you think? A single organisation's code should all be in a single repo, which means that a single version of a dependency is used within the organisation, and that updating to a new version of that dependency is a single, self-contained project.


We use a monorepo at our place of work and it’s great, so far. We also use govendor to manage external dependencies in the monorepo. We are rigorous about keeping packages up to date so have never had to run v1 and v2 of a package at the same time.


> forced down our throats when Google suddenly finds a need for them

You can say that about Go itself. And there are already other/better alternatives. So why use Go?


A language is valued by its programmer bench, tooling, available libraries, as well as the language itself. Honest question - what is better than go right now?


You'd have to specify the use case of interest. I don't think there's anything that uniformly dominates Go, but that's not terribly interesting, since there are so many dimensions of interest for a programming language that there aren't any languages in use that uniformly dominate some other language in use. (You can get uniform domination if you include unused languages or esolangs, but who cares.) But there are certainly many use cases for which Go is not the best choice, not in the top 3-5, or straight-up not a viable choice.


C, C++, C#, Java, Python, Lua, Lisp, Ada...

Go has its good sides, but it's just a tool. It's not the best, because tools aren't supposed to be the best. Tools are supposed to be useful for what they're made for.


I have a hard time with using Java to write micro-services. Even though lots of packages, such as Undertow, support non-blocking I/O, a lot of the rest of the stack and the programmers that use it don’t know how to do NIO. What happens then is that the SRE/Ops team needs to configure large thread pools which consume lots of RAM, just because multiplexing I/O is so difficult.

In comparison I can just start goroutines and Elixir processes for fast and simple I/O parallelism almost without thinking about it.


But it has had mature dependency management tools for a long time. They were similar to cargo and npm.


"Go hasn't had mature dependency management since its inception"

dep is perfectly fine for most use cases.


Agreed, but compared to Go itself, dep is incredibly recent, still in alpha, not widely used yet, and only the latest in a series of incredibly many homegrown approaches that created lots of chaos and churn. And when everybody thought the rollercoaster was finally over, vgo came around the corner. It's a great proposal, no doubt, but it's fair to say dependency management and generics were the two largest unsolved, daily encountered problems with Go in real life. Let's just hope the community gets an epiphany about the latter one as well soon...


I'm ok for now putting everything in one GOPATH as if I were google.

> How many iterations of this do we have to go through?

Fewer iterations than Mr. Cox has gone through. Isn't it great?


I'm excited to see Go trying something new here. Something just feels clunky about the current "manifest+lockfile" approach. There are a couple of ideas in vgo that I really like, e.g. "incompatible versions must have different import paths." I think rsc is justified when he compares these conventions to upper-casing exported identifiers in Go packages: for a while it seemed weird, but now it feels natural and obvious. In other words, Go has a history of breaking from conventional approaches, with surprisingly consistent positive outcomes. Some bikeshedding is inevitable if this proposal is accepted, but I hope that the main ideas are preserved.


> I'm excited to see Go trying something new here.

"Semantic import versioning" is equivalent to versioned APIs in the web world, no? Though I have never seen anyone utilize the same concepts for language-level packages; it certainly maps well!


For the record Roy Fielding is against using version numbers in URLs: https://www.infoq.com/articles/roy-fielding-on-versioning/


> For the record Roy Fielding is against using version numbers in URLs: https://www.infoq.com/articles/roy-fielding-on-versioning/

If Roy Fielding wanted developers to fully grasp whatever he was talking about with REST, he should have written a normative spec, not a dissertation. He didn't.


Roy Fielding isn't a deity. The version would be encoded in the 'Accept' content-type header. There is little distinction, one is in-band (in the path) and one is out-of-band (in the header), but the information is still represented.


I don’t think you have this right. First, both the URL and the headers are in-band. Out-of-band would be you knowing that the version is v2.3.7, because you just know. In-band is anything in the request and response – headers and all – anything else is an assumption and as such out-of-band.

Moreover, if I understand Roy’s argument correctly, he’s arguing against versioning URLs, because they are meant to be opaque identifiers that don’t convey any universal meaning. If you know the second segment of a URL path to be a version, it’s because you have out-of-band knowledge that it is, not because there’s anything in the request to say it is.

If however you have a header that tells what version you wish, then that is in-band information that allows you to negotiate the content appropriately with the server. It also means you don’t have to change URLs whenever you make a change to your API, breaking or not. Of course the server and client need to understand this header (to my knowledge there’s Accept-Version header) and if it’s non-standard then it can be argued that it’s just as bad as versioned URLs. Perhaps, but unless you get very granular with your versions (and most don’t, let’s just be honest) you’re still at the mercy of what the server chooses to present you at any given time. In fact, REST gives you no guarantees about what you’ll receive at any given point – you may just as well be given a picture of a cat. REST says you should be able to gracefully deal with this. Most clients (that aren’t web browsers or curl) don’t.
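For illustration, header-based versioning usually rides on a vendor media type carried in the standard Accept header rather than a custom header. Everything in this request (host, media type, version parameter) is invented:

```
GET /orders/42 HTTP/1.1
Host: api.example.com
Accept: application/vnd.example.orders+json; version=2
```

The URL stays stable across versions; the client and server negotiate the representation through ordinary content negotiation.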


No! Headers are much worse. There is nothing in an API response that tells you in a standard way how to set headers, but there is something that tells you how to follow links (URLs). Accept-Version is a non-standard header, not defined in the HTTP spec, so that is definitely not RESTful – how do you know if the site supports it or what the version numbers are? You have to hard-code it in the client!


Arguable. At least an unknown header is likely to just be ignored, whereas a URL that’s changed to update the version will at best cause 404s, unless you keep the old URLs around. If those URLs are still recognized, but routed to the later version anyway then that’s just as bad as using an unrecognized (and likely ignored) header.

In any case, what you’re saying is the point I was trying to make in the latter part of my post. However a typo (or rather, a missing word: “no”) unfortunately changed my meaning entirely. :o(

It should’ve read:

> (to my knowledge there’s no Accept-Version header)

Oops. My apologies for the confusion.


> I don’t think you have this right.

That is probably true, I don't disagree with anything you are saying. To me the most in-band representation is everything passed in the URL with no user control over headers.


Using "in-band" here doesn't really follow from a technical understanding of the term; the TCP connection is the channel, and anything sent over it (such as an entire HTTP request/response, including headers) is "in-band".

Out-of-band would be a phone call, or perhaps an email - an entirely alternate method of communication.

Transparency to a user (via the client, i.e. browser) isn't really relevant, from a communications standpoint, to whether or not data is considered "out-of-band". Given the subject (APIs), you're not likely to be browsing to these anyway.


Well, I think from Roy's point of view REST is only HATEOAS. So URLs are irrelevant; you can put version numbers in there if you want, but no client should rely on them.

What you should be versioning is your media types (e.g. HTML4 vs HTML5).


He is not a deity, of course, he's "just" the one that invented REST. I guess what you mean is that he's not right about everything he says.

Well, what he actually says is that URLs should not include versioning, because URLs are the interface names, and REST states that interface names should not be versioned, as that implies a breaking change in the API. It's just the wrong place for versioning in the REST way.

But he is not against versioning: as you say, you can use the Accept header. You could also use a custom header if you like, but the canonical way would be the Accept header.


I believe that Roy is nerd-sniping us; I capitulate. Ok, Roy was wrong, versions in the URL don't break things, they make them stronger. Naming is hard; do we republish a semantically related but different artifact and change the name rather than the varying part? Use ETags instead? Change the domain?

I totally understand asking for the version (Accept) during a GET request. That works if I agreed on that content-type in advance; if I haven't, I need to communicate the url, the content-type, and the version to the clients. We don't have a common container for (url, content, version), so things are getting messy. In a package manager, what is the equivalent of a GET request with an Accept header?

    import foo.bar v1.2.3
    import foo.baz v2.1.3

    new_thing = ::v2:foo.baz.new_thing(1)
    old_thing = ::v1:foo.bar.old_thing(2)

Joe Armstrong has a great post on modules and versioning http://lambda-the-ultimate.org/node/5079

I am strongly in favor of immutable code; I think we should be able to import all versions of a library, referenceable by commit hash.


Interesting idea. I usually code in languages that have lib packaging and dependency management systems. Recently I have gone back to coding C on Linux, and I quite miss them now. We have pushed version management to the dependency or build management tools, but it would be awesome to be able to add that meta in code and have those tools also automate the tasks. BTW, wouldn't it be import foo.bar in both? It would be more logical to keep the name of the lib and import it twice, but with different version metadata.

Thanks for the link as well! I would love to see that experimented somewhere and see if it works.


Specifically as it applies to REST, though, and HATEOAS. I don't think it has much bearing to package versioning in Go...


Minimal Version Selection

This is the piece that is very different from other dependency managers and is worth people looking at.

Instead of trying to get the latest version, it's going to try to get the oldest. If you want to update to newer versions of transitive dependencies (dependencies of dependencies) that your dependencies have not updated for, you'll need to start tracking those yourself.

There's an issue touching on this at https://github.com/golang/go/issues/24500

Other package managers use the opposite of minimal version selection. Many of them even have a don't-make-me-think command to update the whole dependency tree (e.g., `npm update`).

What do folks think of the implications of MVS, especially for transitive dependency management?


> What do folks think of the implications of MVS, especially for transitive dependency management?

I can't think of more than a handful of times over the past twelve years (primarily Ruby and Rust) where a newer package broke an existing one, and the majority of those times the problem was a new major version that wasn't appropriately accounted for (e.g., the author should have depended upon '~> 2.x.x' instead of '>= 2.x.x'). I'd much rather have that problem addressed. Assume semver, and make the easiest way of specifying a dependency cap it to a major version.

On the other hand, I regularly deal with the consequences of upgrading infrequently — when you do upgrade, it's a nightmare. As a rule of thumb, the more frequently you update your dependencies, the less net pain you experience. Upgrading once a week is relatively painless. Upgrading once a year (or even less frequently) is a nightmare due to the sheer volume of changes coming in at once.

Also, having transitioned to the security side of things over the past few years, encouraging stale dependencies just means that security patches will never get incorporated. A stable project that releases mostly security fixes and few feature changes will — in practice — never see its minimum version bumped in projects that depend upon it.

My prediction is that this will just result in security patches not being applied and the problem of upgrading dependencies will be made generally worse. I will be happy, though, if I am proven wrong.


According to my reading of it, there is an update-all operation. It has just been separated out in an attempt to give the user control over initiating it.

But I agree that procrastinating on updates creates problems. (At some point, the entire point of version pinning is to allow you the choice to defer, but it's still a problem.)

I think it might be worth exploring reporting as a solution. Everyone knows that old versions are a problem, but what if I had tools to tell me how bad or good the situation is for my project at this moment? And what if they ran by default either periodically or as part of my build or both?

Examples of stuff it could tell me:

* Am I one minor version behind on this one library and it doesn't matter?

* Or am I a major version behind on this other library and I'm using code that isn't even supported or maintained?

* Are there security fixes that I haven't taken?

* Are there libraries that have security fixes but no release is available yet?

* How about a list of libraries that I'm not on the latest minor version of AND the latest version has been available for more than 2 weeks? (Maybe I don't want to fall behind but I don't want to be a guinea pig either.)

* Or a list of libraries that I'm not on the latest major version of and a newer major version has been available for 6 months?

Since this is important, it would be great to have real visibility into it. Right now, every build I've ever done, this is just something that people track in their heads and just assume they have a good handle on. Doing regular releases and always taking the latest version of everything helps somewhat, but sometimes a release gets canceled. Or maybe there's a system that isn't being regularly worked on and doesn't have regular releases, yet dependencies are being updated, it is behind, but by how much?


> Instead of trying to get the latest version it's going to try to get the oldest.

No, minimal refers to the number of dependency changes between upgrades, not the version numbers of the dependencies. Dependency conflicts initially resolve to the higher of the two versions.

https://research.swtch.com/vgo-mvs section "Algorithm 1: Construct Build List"

In the example, a dependency is upgraded and no longer needs one of the shared transitive dependencies. Instead of downgrading it to the version declared by the unchanged dependency, it keeps the higher version that you were already using. When upgrading, transitive dependencies are never downgraded, only added (or removed).

https://research.swtch.com/vgo-mvs section "Algorithm 3. Upgrade One Module"


Let's say you have the following picture...

    App
    --> Dependency A --> Dependency C (at version 1.2.3)
    --> Dependency B --> Dependency C (at version 1.3.4)

The latest release of Dependency C is 1.4.5. Which version would be used after running `vgo get -u`?

As you note it would use the newer version of those specified (version 1.3.4). The newer releases after those explicitly specified are not used.


I believe (having tried the tour) that you will get v1.4.5 at the App level, if you do vgo get -u.

If you just do plain vgo get, you will get 1.3.4, because of the minimum version approach.


MVS's upside sounds like the dependencies of dependencies won't need to change. However, the downside is: how would you upgrade security patches in older packages? They would always fetch the older dependency and never get upgrades. There's a reason why npm install was made the way it was, though it went too far in the other direction by always getting the latest dependency.


As Russ points out, the current approach of pulling the latest leads to a situation where package maintainers can be quite sloppy. For example, a package may have a stated dependency A@1.1.2. But because everyone is pulling the latest, it may turn out that the package no longer works with that version of A at all.

The vgo approach will encourage package maintainers to update their dependency versions to what they have actually tested with, which is very useful information for consumers of the package.


> Instead of trying to get the latest version it's going to try to get the oldest.

Err, no. For each library, it uses the newest version of that library that is required by any of its dependents (including transitive dependents).

See https://research.swtch.com/vgo-mvs

Especially the line: "Simplify the rough build list to produce the final build list, by keeping only the newest version of any listed module."

Different major versions are considered different libraries essentially.

To be honest I thought this was all obvious and I'm sure this approach has been recommended by Go people in the past.

It has another nice advantage over Rust's system - you can have pre-releases of major versions. Rust doesn't really have a way to do that except for releases before version 1.0.

In other words, when you want to make some incompatible changes and release version 2 of your library, there's no good way to put test versions of it on Crates.io.


> It has another nice advantage over Rust's system - you have have pre-releases of major versions. Rust doesn't really have a way to do that except for releases before version 1.0.

Semver has a specific way to indicate that a version is a pre-release, and people use it. The second post on /r/rust is advertising a pre-release of the next major version of rand: https://crates.io/crates/rand/0.5.0-pre.0

(Yes, this is before 1.0, but 1.5.0-pre.0 would work just fine too!)
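Depending on that pre-release from a Cargo.toml looks like this (crate name and version are taken from the linked page):

```toml
[dependencies]
rand = "0.5.0-pre.0"
```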


Do you have a reference for that? It isn't mentioned here: https://doc.rust-lang.org/cargo/reference/manifest.html#the-...

I found this bug report and it seems like it has some issues: https://github.com/rust-lang/cargo/issues/2222


Reference: I maintain the semver library. I’m on my phone, so I can’t link you to the docs right now. I’ll add a link tomorrow.


So, looking back, we don't explicitly say that we have these things, we just say that we use semver versions, and the semver spec specifies them. I've been meaning to re-vamp the cargo docs, maybe this is worth explicitly calling out. Thanks!


Ah cool, sorry I wouldn't have asked for a reference if I'd noticed your username!


It's all good! <3


> Other package managers use the opposite of minimal version selection. Many of them even have a don't-make-me-think command to update the whole dependency tree (e.g., `npm update`).

vgo does have that too: vgo get -u


That will work for direct dependencies but not dependencies of dependencies. See https://github.com/golang/go/issues/24500


Correct, it will get the newest version that is actually tested against your dependencies, which is by design.

Why would you want to pull a version that is newer than what the author of the library you depend on has actually tested with? You can of course force this in vgo, but having the default use the versions specified by the authors makes a whole lot more sense than just using the newest.


> Why would you want to pull a version that is newer than what the author of the library you depend on has actually tested with?

1. To install a security update or bug fix update you need in a transitive dependency that the author of the dependency you're using hasn't updated to.

2. To use the same workflows across all my dependency management tools (npm, cargo, composer, bundler, and the rest of the lot follow the same patterns, and vgo goes against the patterns used by the others)

There are two reasons.


You can do 1); you just have to make that an explicit dependency of your own. It just won't grab the newest by default for sub-dependencies. So yes, you can pull the latest.

2) seems like a bit of a silly reason; if we all wanted to make everything work the same all the time, we wouldn't make much progress or try anything new. Whether vgo's approach is correct or not we don't know yet, but saying that it isn't familiar isn't a good enough reason not to try it out.

To me, vgo matches what we already do in our python projects with a lot of dependencies. Pin everything in a freeze and upgrade on a schedule when we need to. We have seen far too many failures doing it any other way. (I.e., using the "latest" of everything, which often either breaks semantic versioning outright or introduces subtle bugs that didn't exist before.)


Using something like pip-compile, or literally `pip freeze > requirements.txt`?


The latter yes.


3. To apply bugfixes that affect you, but weren't exposed by your direct dependency's tests.


I believe that vgo get -u works for both direct and indirect dependencies.

See the vgo tour : https://research.swtch.com/vgo-tour

The indirect dependency (rsc.io/sampler, which is a dependency of rsc.io/quote) is also upgraded to the latest version v1.99.99 when vgo get -u is done.


To all the skeptics, please try it out and provide constructive feedback. Experiment with the corner cases you think won't work. Judging just by the blogpost won't help anyone.

Personally, I'm very excited and impressed by the ability of the Go team to innovate by diving deep and understanding every aspect of the problem at hand and the existing solutions, instead of just blindly adopting whatever already exists.


The lack of support for private repos is still a dealbreaker for both `dep` and `vgo`.


I wanted to find out, so I looked at vgo's source code, and found that vgo does indeed have some solution to this using Github Access Tokens and ~/.netrc: https://github.com/golang/vgo/blob/b6ca6ae975e2b066c002388a8...


How is it different from using 'go get'?


`go get` has so many other blockers for enterprise use that "support for mirrors" barely registers. But yes: it's no different, `go get`'s lack of mirror support is also a blocker for tons of businesses.


govendor will use local versions of deps anyway if you have them in your GOPATH


My initial reaction to Minimal Version Selection (MVS) is concern that developers won't get security updates and bug-fix patches applied to their dependencies.

But dependency software has been trending toward the use of lock files for a while now - and without explicit developer intent, those won't get bugfixes either.

I think I'm mostly concerned about how this affects transitive dependencies. My package Foo depends on Bar_1.3.0, which depends on Baz_3.2.4. If Baz gets a security update to Baz_3.2.5, either I need to add an explicit dependency "Baz_3.2.5" to Foo, or wait for Bar to release 1.3.1 that depends on Baz_3.2.5.

If go adds tooling to identify and make these transitive dependency upgrades as easy as "npm update", then I will be a little bit less uneasy.
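Under vgo, that explicit override would look roughly like this in Foo's go.mod (module paths are invented, and the syntax only approximates the vgo posts; note that Baz at major version 3 would live at a /v3 import path under semantic import versioning):

```
module "example.com/foo"

require (
	"example.com/bar" v1.3.0
	"example.com/baz/v3" v3.2.5 // explicitly require the patched transitive dependency
)
```

MVS then selects Baz v3.2.5 because it is the maximum of the versions required anywhere in the graph, without waiting for Bar to cut a new release.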


I don't think libraries have to do this, just applications. As a library owner you're not responsible for making sure downstream gets all the latest security patches. They can do it themselves at will, by running "vgo get -u".

https://research.swtch.com/vgo-tour


The example there dealt with direct dependencies, but what about transitive dependencies (dependencies of dependencies)? If you have a module that asks for an old version of a dependency, what will upgrade that? Minimal version selection will keep the old transitive versions. That's where the concern comes in.


I think the -u flag might already be transitive? Haven't tried it, though.


Exactly. MVS may sound weird at first, but in practice it works identically to having a lockfile (which is becoming the standard these days), just simpler.


"But dependency software has been trending toward the use of lock files for a while now - and without explicit developer intent, those won't get bugfixes either."

You know, I wonder if there's something here that a next-generation language can get in on, some sort of help to provide to the developer who says "OK, I'd like to upgrade this package for people, could you please help me ensure that I'm not going to break anybody in the process?"

Possibly this line of thought terminates in very richly dependently-typed languages, which is a bit of a utopia. But perhaps there's something in between? Or something that can be added to an existing language like Rust?

I'm not even initially certain what that would look like. A version-aware programming environment in which one can sensibly say "Yes, for 1.1 I upgraded the unit test but please run the 1.0 unit tests against the 1.1 code"?

It seems like this is a growing problem and there's probably an opportunity of some sort here.


> You know, I wonder if there's something here that a next-generation language can get in on, some sort of help to provide to the developer who says "OK, I'd like to upgrade this package for people, could you please help me ensure that I'm not going to break anybody in the process?"

Russ has proposed a "go release" command that is intended to help with that process. It's probably simple right now, but has lots of room to grow in that direction.

See: https://research.swtch.com/vgo-cmd


> It seems like this is a growing problem and there's probably an opportunity of some sort here.

I think there's an opportunity even within existing languages: more shared CI infrastructure. Imagine if project authors had some easy way of running their downstream consumers' test suites as they develop?


It's not all the way to what you're proposing, but CPAN has a very nice means of testing packages for system compatibility. By default, users installing new packages run the tests for those packages. Most CPAN clients can be configured to report those test results back to central locations. This isn't the same as running the tests for the consuming project, but it demonstrates the feasibility of "crowdsourcing" tests for system/language-runtime-version/other-package-version incompatibilities.


I'm honestly confused about why "this package manager is a SAT solver" is being trotted out as a bad thing. Repeatedly. Having used multiple such package managers in the past: the runtime is utterly dominated by time-to-download or even simply disk access, not time-to-compute. Compute time has uniformly been far below human-visible times - e.g. a few hundred dependencies resolves in `dep` in around 100ms (for the solver) on my machine.

SAT is not a problem at all. Yes, you can construct a worst-case scenario for it that will chew up a ton of CPU. In practice it simply doesn't happen, and trying to defend against it is both a waste of effort and leads to crippled decision-making.


I think it has more to do with the complexity. Minimum version selection is trivial to implement and understand. The processing time is a side-note.


MVS is less complex to code, definitely agreed. But given how well studied and broadly implemented SAT is, I don't really think that's a useful distinction to its end goal - be a useful build system. Chucking the whole transitive-version-respecting thing is even simpler[1] but it's terrible and Go shouldn't do that just because it's simple.

And for "understandability" in a conceptual sense, personally SAT has always felt simple and predictable to me - "find something within all bounds or err" is something we do all the time by hand.

[1] e.g. pip essentially ignores version constraints if something's already installed or some other lib already mentioned it. it's absolutely terrible and causes many problems in anything long-lived.


A non-SAT system is understandable in the sense that you can, as a human, look at it and understand what the solution should be. That's kind of cool.

It also seems within the realm of the imaginable that having a scheme easy enough to implement without falling back on writing a heuristic backtracking search might result in more tools being written.

Obviously, you are right that being NP-complete is clearly not a showstopper or a huuuge problem, in practice. Still, seems nice to avoid if you can!

By the way, that's interesting to hear about pip. I do my best to avoid python, but I inevitably end up reluctantly wanting to use some tool written in it, and inevitably it breaks. What a fucking shit-ass ecosystem.


Re Python: pretty much :| My personal favorite is that, because of this behavior, you can install a single top-level package into a fresh virtualenv and get invalid transitive dependencies. Both in theory and in practice.

If you ever get back into Pythonlandia, do check out pip-compile - it's a properly sane package manager, following the normal SAT solving path. Major lifesaver, 100% recommended.

I enjoy the language well enough - it's readable and expressive. But it's so terrible for building a business on top of, and that's largely due to the ecosystem.


This is not my experience at all. Doing an “apt dist-upgrade” after not upgrading for a week or two regularly takes _minutes_ on my i7-8700K to resolve dependencies.


Apt doesn't use a SAT solver; it only removes conflicts if it finds them: https://aptitude.alioth.debian.org/doc/en/ch02s03s02.html


I've never seen this. Do you use only packages from your distro?


Yes, Rich Hickey (amongst others I am sure) has talked about precisely this model of backwards compatibility - i.e. if you break the contract you have to rename the thing.

This feels far better than the current model used in most languages. If you've ever had struggles creating an uberjar you know this pain.


Agreed, after years of JVM dependency hell (which happens in any language), I think "never make breaking changes" should be non-negotiable for publicly-published code.

E.g. repo managers like Maven central/etc. should use binary API analysis to reject any jar upload that has breaking changes.

My only hesitation is that, AFAIK, semantic import versioning has never been tried at scale, so having to constantly bump imports from "com.foo.v1" to "com.foo.v2", and deal with "app1 wants to pass com.foo.v1 objects to app2, but it expects com.foo.v2 objects" might introduce more pain than expected.

Granted, right now app1/app2 are blithely passing around "com.foo" objects that may/may not be compatible, but if it's an 80/20 thing, or 99/1 thing, and most of the time you get lucky and it works, perhaps that's good enough.

But would be great to have go be the first community to try this at scale and see how it goes. I like it.


Russ actually mentions Rich Hickey's keynote talk, Spec-ulation (https://www.youtube.com/watch?v=oyLBGkS5ICk) in one of his blog posts about vgo


Well, the article is suggesting putting the major version number in the name, so that “change the name” and “bump the major version number” are equivalent actions, which is a little more subtle than just a plain rename.


Okay, so I'm reading the minimal version selection algorithm article [1], and there's either a flaw or something I don't get.

Here's a modified version of the example graph, written as pidgin DOT [2]:

    A -> B1.2
    A -> C1.2
    B1.2 -> D1.3
    C1.2 -> D1.4
    D1.3 -> X1.1
    D1.4 -> Y1.1

The key thing here is that D1.3 depended on X1.1, but the new D1.4 depends on Y1.1 instead. I guess X is old and busted, and Y is the new hotness.

What is the build list for A?

Russ says:

The rough build list for M is also just the list of all modules reachable in the requirement graph starting at M and following arrows.

And:

Simplify the rough build list to produce the final build list, by keeping only the newest version of any listed module.

The list of all modules reachable from A is B1.2, C1.2, D1.3, D1.4, X1.1, and Y1.1, so that's the rough build list. The list of the newest versions of each module is B1.2, C1.2, D1.4, X1.1, and Y1.1, so that's the build list.

The build list contains X1.1, even though it is not needed.

Really?

[1] https://research.swtch.com/vgo-mvs

[2] https://www.graphviz.org/doc/info/lang.html


I believe that implicit in this scheme is that you cannot remove dependencies. It seems possible to add new dependencies, however. Changing dependencies can be seen as removing one and adding the other, but you can only do one, not the other. This limitation could probably be patched up by resolving a set of versions and then noticing that X1.1 is unreachable and discarding it. The trouble with that fix would be if X1.1 forces some other package to an unnecessarily high version even though it ends up being discarded.


Why not just put the full version number in the import path?

Both the manifest (dependency section) and lock files become unnecessary. It’s DRY.

Dependencies are specified where they’re used, improving componentization.

Upgrading and forgetting to update one location is easily fixable: tooling already scans Go code for a list of imports; modify this to warn on version differences, or even to update them.

Git history gets “cluttered”, but shouldn’t it be? The behavior of values in a file is changing. This constitutes a change of requirements on the file’s code, or at least needs a moment’s review to decide the code needn’t change. Seeing that change in history would make tracking down any bugs it causes easier. Besides, we’re talking more files changed in a single edit, not more edits.

Semantic versioning is a qualitative description, not a guarantee. Due to edge-case use or human error, every minor or patch update may be a breaking change. It would be better to have a layer of human interpretation between semantic versions and code changes, rather than tooling that assumes them to always be correct.


A published library /is/ an interface specification.

Fixing bugs or extending the interface (with additional and possibly corrected parameters) is one thing.

If the interface specification ever needs to have backward incompatible changes, the /library/ needs to be renamed (or at least have a different leading import path).

If something doesn't work, development's answer is always going to be "test it with the latest version" (at least shipped, if not the 'git HEAD').


It doesn't scale. If you have a->b->c and b version-bumps every time c does, then this also means a has to version-bump. So basically changing one file means everything that depends on it, even indirectly, has to change.

Furthermore, if you have diamond dependencies, a might end up version-bumping more than once.

Regular edits would get lost in a sea of version-bumps.


If C changes and B doesn't reflect the change in its own behavior, then there's no need for a new release of B. Just update it and group it in with the next normal release.

If C changes and B's behavior does reflect it, then A does need to know about it.

The "sea" is limited to times that direct dependencies change their behavior, which is what it already is.


You're assuming that B does regular releases. But if B is "done" it might not happen for quite a while.

Suppose C has a security patch, and B doesn't have any new functionality to release? Does it keep using the version without the security patch forever, because that's what's listed in the import statement? Or does someone have to update it?


I include “security patch” in “behavior”, subject to both conditions above.

If B isn’t otherwise updated at a similar time, then yes, I advocate dependency update-only releases. I think this is as it should be. A project should be able to know what code it runs, specifically.

And yes, if a project is abandoned then security patches don’t get magically applied. Again, as it should be. The fact that you have an unmaintained dependency is itself a problem. You can’t just auto-apply security patches and expect things to keep working, ask any Linux distro. What needs to happen is a fork (or dropping the dependency). Tools warning you encourages this; silently auto-applying patches encourages everybody to separately do their own fixes and workarounds.


I don't think it's reasonable to require each library owner to do regular updates, whether or not they have any changes to make themselves. This is unnecessary busywork. Library maintainers should only be responsible for fixing their own bugs, not responding to everyone else's bugs.


I like the idea too. Something along the lines of import "fmt#1.2.3". The problem is that you do repeat yourself: you have to sync the version across any files that use it.


It's not repetition if it sometimes varies, as the article discusses.

It's certainly a common case that they're all the same, and we can handle that by augmenting the go tool, which already scans the project for import lines, to warn on different versions, or even update them in place.


I am very happy that Russ Cox et al. are still brave enough to propose new ideas when they see something not working as intended.


The explicit `v2` in import paths is a case where Go has abandoned conventional abstraction and information hiding in an area where maybe there didn't need to be any in the first place.
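For readers unfamiliar with the convention: under semantic import versioning, major version 2 and later appear in the import path itself, so incompatible majors are effectively distinct packages and can coexist in one build. The repository path below is invented for illustration:

```go
import (
	foo "example.com/me/foo"      // major version 0 or 1: no suffix
	foov2 "example.com/me/foo/v2" // major version 2+: /vN in the path
)
```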


Sam Boyer had a take on this which is worth reading, considering he is the current developer of dep: https://sdboyer.io/blog/vgo-and-dep/


To preface this: I have not looked into the internal details of vgo, so it could be the world's most interesting solution to this problem. But I feel like this is a punch in the gut for a lot of people who spent tons of time looking at ways to solve dependencies in Go. From his post, it looks like Russ never really looked into dep closely, even as he gave it his "blessing" during GopherCon. It seems like whenever an issue gets brought up in a proposal (not always, but often), it gets shot down by the "higher ups". Not until it gets their attention, or becomes a big deal to someone (Cloudflare), is it important.

Look, don't get me wrong: vgo could be a great technical solution, but disregarding the work of those who actually cared about this issue before you did is a big mistake. It will alienate people in the community who care about improving the ecosystem for those outside of Google. I want to thank all of those involved in dep; it's a great tool that solved my dependency problems! Without you, we would not be having this discussion about vgo and a better way to do dependency management in Go.


It's tough, but sometimes you need to throw away work. It's better ultimately in the long run. I think Russ has been doing a pretty good job of giving recognition and credit to the dep folks for their exploratory work.


Saying that dep and prior art have been ignored or not "looked into" is unfair. Russ is very clear about where dep and similar tools fall short of a perfect solution. Not only that, but Google has contracted Sam to help with vgo, since he presumably has the most experience with this problem in Go.

I'm sure it's disheartening for people who worked on dep. I've had a great experience with it too, especially after the .4.1 release.

While not everyone agrees, one of my favorite things about Go is that every decision is well thought out in terms of its effects on the entire ecosystem. Just because some other language or toolset does something isn't a good enough reason to force it into Go. I trust the overlords, they've been good to me so far. :)


I want to know what they consider the limitations of the cargo/rust approach. It has been really nice to use as a user.


To me, the one thing I'd like to change about Cargo is the required initial clone of a 100MB+ git repository before you can even install something.

IIRC this is because of a libgit2 issue preventing shallow clones, so there's no way around it for now.

Disclaimer: I use both Go and Rust on a daily basis and think both are nice in their own way.


That's more of an implementation detail, though. The design does not demand it; I assume it was of no concern initially, when the ecosystem was tiny.


I agree, but it's not a detail in practice. My feeling is that Rust is very attentive to theoretical details, and Go to practical details. Of course it's an over-simplification, and both approaches are pertinent and complementary ;-)


That's the exact opposite of what the grandparent is talking about, though. Cargo was the one saying "this works for now, we'll fix libgit2 later", which is firmly in the practical camp. vgo is the one saying "we can't emulate other package managers because SAT solvers are slow", which ignores that in practice they're not, valuing strictly theoretical considerations instead (and, in practice, Cargo doesn't even use a SAT solver anyway, so they didn't do their homework).


I meant that Go usually focuses more on solving practical issues than theoretical issues. But I have to agree that it is the exact opposite in the example I replied to ;-)

Yes, Cargo doesn't use a SAT solver, but Cargo's source code acknowledges that "solving a constraint graph is an NP-hard problem" and uses a "nice heuristic to make sure we get roughly the best answer most of the time". [1]

It's not just a theoretical consideration; it can create real problems. See for example "Abort crate resolution if too many candidates have been tried" at https://github.com/rust-lang/cargo/issues/4066. I'm not saying it's a big issue, but it's something to consider in the design space, and this is why the Go team is considering other options.

[1] https://github.com/rust-lang/cargo/blob/master/src/cargo/cor...


Again, in practice, this has not created real problems, which Russ Cox seems to fail to appreciate. I've been using Cargo for years, working with other programmers for years (including programmers using large Rust codebases in production at large companies), and teaching programmers new to Rust (both online and off) for even longer. The number of times I have had crate resolution abort, or found the heuristic-chosen dependencies undesirable, or seen any other person complain about either: zero. My sample size is not small.

I respect Russ Cox's decision to favor different considerations for Go's versioning story. The approach of constraining to minimal versions is not bad, merely different (especially since the -u flag exists). But the framing of this as solving some problem with existing package managers is simply mistaken, as Russ would know if he had used these tools in practice, rather than instinctively reeling at the theoretical implications.


> in practice, Cargo doesn't even use a SAT solver anyway, so they didn't do their homework

It's not fair to accuse people of not doing their homework when they actually are...

Russ Cox published "Version SAT" in December 2016. [1] The article specifically mentions "Rust's Cargo" which "uses a basic backtracking solver".

[1] https://research.swtch.com/version-sat


Rust might have theoretical problems but it doesn’t matter in practice. That’s all that matters to the Rust community.


See https://news.ycombinator.com/item?id=16423049 for a discussion between rsc and Rust developers.


Quoting the article:

> […] the import uniqueness rule: different packages must have different import paths. […] Cargo allows partial code upgrades by giving up import uniqueness.

> The constraints of Cargo and Dep make version selection equivalent to solving Boolean satisfiability, meaning it can be very expensive to determine whether a valid version configuration even exists.

> eliminates the need for separate lock and manifest files.

Note that Cargo is seen as a "gold standard" approach here, upon which rsc is trying to improve.


> It is of course possible to build systems that use semantic versioning without semantic import versioning, but only by giving up either partial code upgrades or import uniqueness. Cargo allows partial code upgrades by giving up import uniqueness: a given import path can have different meanings in different parts of a large build. Dep ensures import uniqueness by giving up partial code upgrades: all packages involved in a large build must find a single agreed-upon version of a given dependency, raising the possibility that large programs will be unbuildable. ... Semantic import versioning lets us avoid the choice and keep both instead.

It's a useful exercise to be critical of existing systems and see if there are opportunities to improve that they missed. That's how progress happens.

At the same time, there is a common fallacy (especially among very smart people) that this might be an instance of. Even if it isn't and the Go folks came up with something brilliant, I think it's worth talking about because it occurs a lot elsewhere. I'll call it the "Missing Third Feature".

Cox's claim boils down to: "X gives you A but not B. Y gives you B but not A. I've just come up with Z, which gives you both A and B, so it's superior to both."

In many cases, though, the reality is: "X gives you A and C but not B. Y gives you B and C but not A. I've just come up with Z, which gives you A and B, but not C (because I'm likely not even aware that C exists)."

I could be wrong, but if I had to guess, C is that vgo has no ability to express compatibility across multiple major versions.

Let's say my package foo depends on bar. The maintainers of bar add a parameter to a function in it, as well as fixing a number of other bugs. That signature change is a breaking change, so they rev bar to v2. My package foo does not call that function, but in order to get the bug fixes, I have to change all of my imports to use bar/v2. Anyone using foo must then either fix all of their imports of bar to v2, or end up with two versions of bar in their application (which might cause weird behavior if values from one bar end up flowing to the other).

The key problem is that, from a package's own perspective, "breaking change" is defined conservatively: any change that could possibly break even a single user is a breaking change and necessitates a major version bump. From a package consumer's perspective, many nominally breaking changes do not actually break that particular consumer. Package managers like Cargo let you express that gracefully. My package foo can say, "I depend on bar and work with bar >1.2.3 <3.0.0" if I know that foo isn't impacted by any of the potential breakages in bar 2.0.0 or 3.0.0. That in turn lets my package be used in a greater number of version contexts without causing undue pain to my consumers.

This may not turn out to be a big deal. It's hard to tell. But my general inclination when I'm designing a system is that if my idea seems unilaterally better than the competition in all axes, then I strongly suspect those systems have a feature that I am oblivious to and that mine lacks and I try to figure out what it is they know that I don't.


> "Missing Third Feature"

Very good observation. I also see this relatively often, but had never seen such a clear explanation of the phenomenon and it is great to have a name for it.

> C is that vgo has no ability to express compatibility across multiple major versions

This is true. However, I think (hope) it will be a non-issue. Some tool may easily take charge of modifying import paths and go.mod files from v1 to v3 (to follow your example). This should not be, in practice, more difficult than adding the <3.0.0 constraint in the lock file.

On the other hand, if you are importing from your foo package some other package baz that also uses bar, and baz requires bar v1 because it is using that function you did not care about, you may have problems using a lock file. Either you will import the wrong version, breaking the build, or you will have two packages with the same import path and different major versions.

It is hard to predict how all this will work in practice. Although I do not think this particular example will be a problem, I agree it is important to keep our eyes open for that "Missing Third Feature".


> Some tool may easily take charge of modifying import paths and go.mod files from v1 to v3 (to follow your example).

No, in my example foo works with all of versions 1, 2, and 3 of bar. vgo has no way to express that. You have to pick a single major version. That's a drag because it means anyone who wants to use foo now has to adopt this artificially narrow constraint. Maybe they want to use bar 3.0.0 for other reasons but I put bar/v2 in my imports in foo.

It means there are many valid package constellations that would work, and where package maintainers know they would work, but the package manager isn't smart enough to understand them.

> if you are importing from your foo package some other package baz that also uses bar, and baz requires bar v1 because it is using that function you did not care about, you may have problems using a lock file. Either you will import the wrong version, breaking the build, or you will have two packages with the same import path and different major versions.

In practice, what this means is that the version solver picks another version of baz until it finds a set of versions where everything is happy. This is hard (NP-complete) in theory, but in practice it works out really well. It gently incentivizes package maintainers to check that their packages work with the latest versions of their dependencies, which in turn keeps the ecosystem healthy and moving forward without being forcibly dragged there.


Ah, yes. Sorry, I misunderstood your example, but it is clear now. This is indeed a potential issue and it may turn into a real problem.


For me, the missing third feature is the decoupling of version resolution from version recording that this approach gives up. Russ Cox presents vgo's not needing separate manifest and lock files as a good thing, but I think that having those be separate is a crucial feature. In a system like Cargo, if I want to reproduce a past state of some software system, as long as I have a Cargo.lock file, I don't have to know or care how those versions were chosen. They could have been chosen by a SAT solver or a soothsayer – it doesn't matter; to reproduce that exact software state, I just install the recorded versions. In the vgo proposal, on the other hand, these are inextricably coupled: in order to reproduce a software state, I need to apply the exact same "minimal version selection" algorithm. If it turns out that minimal version selection has big issues and needs to be abandoned or even just tweaked, old software configurations become irreproducible, because reproducibility depends on the project requirements plus the algorithm, not just an explicit lock file.


"My package foo can say, "I depend on bar and work with bar >1.2.3 <3.0.0" if I know that foo isn't impacted by any of the potential breakages in bar 2.0.0 or 3.0.0. That in turn lets my package be used in a greater number of version contexts without causing undue pain to my consumers."

If I'm following the conversation correctly, then with vgo the idea is that if you've got one bit of code that speaks bar 2.0, and another bit of code that speaks the incompatible bar 3.0, they'll be able to coexist in a single executable. Both bars will be compiled in, and anything that uses each of them will be linked correctly. That's one of the reasons the v1/v2/v3 in the URL is important: these really are different packages. The dependency solver isn't trying to solve for the entire program with a single version. The types won't be able to cross unless the package has made provisions for that, of course.


It's my understanding that this is up to the maintainer; that is, if you release a major version, you can also backport bugfixes to the previous version.

You can also import the new version from the old one, or vice-versa. So one approach would be to replace the old version with a wrapper that calls the new version, but exposes the old, backward-compatible API. (This would be a good idea if there are singletons.)

It's implicit in this scheme that major version bumps should be rare; you don't want to have to do this all the time. Presumably, library owners should save up breaking changes and do them all at once? But you could have multiple versions active at once, if you had to.

It's also the case that people often sneak in minor compatibility breaks without doing a version bump. (In Go this happens pretty often when adding new fields to structs, which can break embedding.) It's often a matter of what you can get away with without breaking most clients, rather than what's strictly a breaking change in the abstract, without considering usage.
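A minimal sketch of the struct-embedding hazard just mentioned (the types are invented): adding an exported field or method to a library type is compatible for ordinary callers, but can change or break name resolution for clients that embed the type.

```go
package main

import "fmt"

// Logger stands in for a type exported by some library.
type Logger struct{}

func (Logger) Log(msg string) string { return "log: " + msg }

// Server is client code that embeds the library type.
type Server struct {
	Logger
	Name string
}

func main() {
	s := Server{Name: "srv"}
	fmt.Println(s.Log("hello")) // promoted method from the embedded Logger
	fmt.Println(s.Name)         // Server's own field

	// If a "minor" release of the library adds a field Name to Logger,
	// s.Name still compiles (the shallower field wins). But if Server
	// embedded two library types and both gained a field Name at the
	// same depth, s.Name would become an ambiguous selector and the
	// client would stop compiling: a break with no semver-major bump.
}
```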


I can see how that would break linkage, but if you're re-compiling everything that changed how is this actually an issue?

Or is the issue that the broken code didn't depend on a publicly exposed part of the library's interface but instead copied something it shouldn't have?


See "adding a field to a struct" on this page:

https://blog.merovius.de/2015/07/29/backwards-compatibility-...


We have just moved to using dep. Yes, it exposed some issues in our dependency tree, but it works now, and I don't think our organization will be looking to move anytime soon.


I feel like this is the package management equivalent to the shift from geocentrism to heliocentrism. Yes, you can approximately predict the movements of the planets with the geocentric model and lots of epicycles (Bundler/npm model), but it suddenly makes so much more sense and it's so much simpler if you assume the sun as the center of the solar system.


Having just built my first project in Go, I'd be happy if some form of standardization were introduced. Everything I read was mostly along the lines of "well, there's this community standard process we use that's not official, but it might as well be". But when I started installing packages, it was clear that this lack of focus on package versioning meant people were simply pulling packages from master or from the latest GitHub release, depending on which tool you used.

Examples of how I had to deal with packages:

Yaml parsing - Used gopkg.in/yaml.v2. What does that mean? I don't know. v2 could be one git hash today, and another tomorrow.

Slack SDK - Used nlopes/slack. dep ensure -add dumps me onto the latest GitHub release. That was fine, although I didn't realise that was what happened. The latest release is 12 commits behind master at the time of this writing.

A CLI framework - Used urfave/cli. Same thing with dep ensure -add, except I realised that the last GitHub release was eons ago (August 2017, IIRC). The code I needed in particular had landed in October 2017, and master has had many commits since. I just added a constraint to my Gopkg.toml file to lock the pull to master.

The point is that in all 3 of these examples, each person had an entirely different way of doing things. All 3 libraries are mature in terms of functionality, and the people behind them are experienced Go devs. I'm not entirely sure where the statement "Independently, semantic versioning has become the de facto standard for describing software versions in many language communities, including the Go community" comes from (the "including the Go community" bit, that is), because that was not my experience.

If this tool introduces some clarity and helps push the community towards developing and releasing more sensibly I'll be really happy. Otherwise I foresee that instead of using semantic versioning, I'll be locking to different git commit hashes to choose which API version I want.


> Yaml parsing - Used gopkg.in/yaml.v2. What does that mean? I don't know. v2 could be one git hash today, and another tomorrow.

I'm sure that you can use a fixed git hash if you want.

> Slack SDK - Used nlopes/slack. dep ensure -add dumps me onto the latest GitHub release. That was fine, although I didn't realise that was what happened. The latest release is 12 commits behind master at the time of this writing.

Do you expect the author to tag a release every time he pushes to master? You can use the master branch if you want.

> A CLI framework - Used urfave/cli. Same thing with dep ensure -add, except I realised that the last GitHub release was eons ago (August 2017, IIRC). The code I needed in particular had landed in October 2017, and master has had many commits since. I just added a constraint to my Gopkg.toml file to lock the pull to master.

It is barely 6 months old! How often do you expect the author to release a new version?


I'm not faulting anyone, or any of the projects. I'm pointing out that standardization of package version management on the package maintainer's side is something the Go community hasn't agreed on.

All your replies describe workarounds: use a git hash, use the master branch. I did those. The point is that even if the software industry hasn't fully figured out package management, we do have more mature practices than this.

Do I expect the author to tag a release every time they push to master? Kind of, actually. It depends on what's going into master. If it's typo fixes, sure, that can wait. If it's major bug fixes, that probably should be a release. If it's brand new features, then yes! It has to be a release!

Right now, people are treating each commit to master as a new release.

I'm not angry or railing against anything. I'm an outsider who went through a lot of strange things in my first project in Go. I think this line from nlopes/slack README highlights the general package management methods:

"v0.2.0 - Feb 10, 2018

Release adds a bunch of functionality and improvements, mainly to give people a recent version to vendor against."

All I'm saying is that it'll be great if the community does figure this out and settles with a single agreed on practice.

---

One last example:

stretchr/testify is a library I considered using for testing. Their README encourages people to depend on the master branch, and even this PR acknowledges the issues that practice has caused: https://github.com/stretchr/testify/pull/274 .

I agree that there are workable workarounds. But that's exactly what they are. Workarounds.


> A CLI framework - Used urfave/cli. Same thing with dep ensure -add, except I realised that the last GitHub release was eons ago (August 2017, IIRC). The code I needed in particular had landed in October 2017, and master has had many commits since. I just added a constraint to my Gopkg.toml file to lock the pull to master.

What's wrong with spf13/cobra?


> exemplified by Rust’s Cargo, with tagged semantic versions, a manifest, a lock file, and a SAT solver to decide which versions to use.

I think Cargo got this from Ruby's Bundler, the first modern tool to do all of this. Why does Cargo get the credit?! haha.


I think it's because Cargo was the blessed solution baked into the paradigm from the beginning. Rust gets credit for a lot of things it shouldn't (well, not credit for being first), because it pulls many well-done things together nicely. It might not be novel in a lot of areas, but it seems like it's got a lot of polish.

Or, perhaps it's brought up as example because Rust occupies a space a lot more similar to Go than Ruby does.


The right people are getting the credit, at least: the team that originally built bundler also created cargo.


Yehuda Katz wrote both, so he gets the credit either way. :)


I was recently tasked with learning Go.

Overall I liked it, but the lack of versioning on packages was the deal breaker for using it for any big project. The bigger the project, the more dependencies, and the more things that can go wrong upstream.


Why won't the simple approach of not allowing packages to break API compatibility work? Fixed some bugs or added new functions to a library foo? Keep calling it foo. Made an incompatible change? Call it foo2.

At least, this encourages backward compatibility. Incrementing the version number is an easier decision to make than explicitly creating a whole new thing. One of the awesome features of Go is how stable the language and standard libs are. Why not expect the same of the community?


MVS requires import path compatibility. For semver-major changes the /v2/-style workaround is OK, except when a package is still v0.

What happens in vgo if MY_APP depends on A and B, which both depend on different, incompatible v0 tags of C?

I understand that A and B can declare which v0 they want in go.mod, but it can't be solved for MY_APP.

This unfortunately isn't hypothetical...


I literally just switched to dep.


dep is still the officially recommended way to go for now. Russ Cox has mentioned in his blog posts on vgo that they will be working hard to ease the transition from dep to the built in versioning tools.


Which means at some point we have to switch from dep to vgo. Although the transition could be smooth, the methodology behind it has changed, and that will not be a smooth transition for our mindset.


Sounds like this is still a ways out right?


Much love to the author for clearly thinking this through. As GIT was to DVCS, and SemVer to individual package versioning, I feel this proposal is "the one true way" as a starting point by which all future progress will be judged.


> As GIT was to DVCS...

?

Git did not create a novel concept. For one, it was written nearly contemporaneously with Mercurial, which is also a DVCS and uses a very similar, if not identical, model. SemVer is a similar story: it's not as if people were using /dev/random to number their releases before it.

I really like the fact that Go is catching up in this area, but I fail to see the revolution everybody's talking about here.


I think 99% of the comments are "wtf... they should have copied package management xyz 5 years ago."


Which is how you don't find better solutions.


I don't feel like Git is "the one true way" to do DVCS, though. For example, using SHA-1 as the hashing algorithm doesn't seem like a good idea. Or calculating "packfiles" ad hoc on clone operations; that's wasteful and makes cloning big repos on unstable connections a PITA.

But it's certainly true that Git is the "current best practice" in the DVCS landscape.


What I don't understand is why Go is not simply copying something from another community. Yes, it will be incompatible with the "old way", but what is the "old way" in such a young language anyway? At least then you don't need to relearn basics that have been solved a hundred times already. Just copy and paste, then move on to other topics.

And while you are at it, also add real exception handling, instead of asking people to re-raise errors after each and every function call. Teach people to actually handle most of the errors from underlying libraries, maybe re-raising a more context-appropriate one at the current abstraction level.


> What I don't understand is why Go is not simply copying something from another community

The proposal explains this pretty clearly, using Cargo as an example, and listing what the Go team sees as its shortcomings (at least for golang). Is there an alternative extant approach that you think they've missed?

> And while you are at it, also add real exception handling

They're not going to do this. There are philosophical disagreements between advocates of different approaches here, and Go has very clearly picked its side. https://blog.golang.org/errors-are-values is a good exposition of the case. If it doesn't convince (or at least mollify) you, you probably should either suck it up, or avoid Go where you can.


> The proposal explains this pretty clearly, using Cargo as an example, and listing what the Go team sees as its shortcomings (at least for golang). Is there an alternative extant approach that you think they've missed?

That IS what I claim is wrong; there is no reason to argue about it. Yes, each existing solution has a problem, but so will their final solution. The difference is that an existing solution doesn't take time to invent; it's already invented. You also already know what its shortcomings will be, so you can handle them as well. The problem is the arrogance of thinking you can come up with something better given another 10 years. Unlikely, and too huge an investment.

> There are philosophical disagreements between advocates of different approaches here, and Go has very clearly picked its side.

What is this discussion anyway? When I see Go tools re-raising errors from their underlying libraries, that is a clear, community-wide bug. The user shouldn't care about what library you use, and they certainly shouldn't have to know the ins and outs of your chosen library.

Also, it's very clear that if a 1-line function call comes with at least 3 more lines of code, it is more spammy than Java.

Yes, there are philosophical differences, but go didn't choose any of them but instead did something that is worse.

In some regards it's the same as with package management: the choice was made to solve a problem that has been solved a thousand times already, in the hope that investing another 10 years of developer time would produce something better. Well, as the statistics suggest, they failed, badly.

Would you invest 10 years of a whole community to invent a machine that keeps your milk cool? Probably not. You would probably buy a fridge and be done with it.

> you probably should either suck it up

Where does this arrogance come from? Are you 15 or something? A grown up person is confident when they achieved something, not when they failed at reinventing the wheel, which many people told them is a waste of time.


You seem terribly emotional on this relatively trivial topic. I'll leave you to it.


It would be way more persuasive if they didn't fall back on panic/recover semantics, which are exceptions.


panic/recover is a bit different from exceptions because it can be used as a control structure (like if/for/switch) inside a function. Rust has something similar with panic!.


I’m not sure how that is different than try/catch/finally?


The main difference is that you cannot catch around a specific block of code in the function body; you can only catch when leaving the function (in a deferred call, in Go). try/catch/finally is really a control structure; panic/recover is not. But I agree the difference is not clear-cut and not easy to comprehend without coding a bit with both approaches.
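To illustrate with a minimal sketch: recover only has effect inside a deferred function, as the panicking function unwinds, so there is no way to attach it to an inner block the way try/catch wraps one.

```go
package main

import "fmt"

// safeDiv converts a runtime panic into an error result.
// recover is legal only inside a deferred function; unlike a catch
// block, it cannot guard just one statement in the body. It fires
// while the whole function is unwinding.
func safeDiv(a, b int) (q int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	q = a / b // panics if b == 0
	return q, nil
}

func main() {
	if _, err := safeDiv(1, 0); err != nil {
		fmt.Println(err) // recovered: runtime error: integer divide by zero
	}
	q, _ := safeDiv(10, 2)
	fmt.Println(q) // 5
}
```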


It says it is reproducible, but it depends on mutable git repositories? In an earlier post they mentioned proxy servers to provide some kind of cache; will there be an official one?


Well, you'll always have to trust some point to be immutable.

A git repository can mutate, but so can a central registry or a proxy. Once a version is cached in a registry or a proxy, nothing stops people with enough privileges from modifying that cached version.

So really the only thing that changes is just who you trust.


crates.io is immutable: you can't edit or delete published versions. And now npm is as well, finally.

> So really the only thing that changes is just who you trust.

With an immutable central registry, I only need to trust the organization running it. Without one, my build depends on the maintainers of all my dependencies (including transitive ones). In the case of Go, that would mean trusting Google versus trusting an unknown number of random people.

Even recently in Go there was this event: https://www.reddit.com/r/golang/comments/7vv9zz/popular_lib_... where someone deleted their account and someone else signed up and created another repository under the same name. I believe Minimal Version Selection will at least ensure that no one gets automatically updated to a new version of potentially bad code there, though.


If you want an artifact repository for Go, several exist with varying levels of immutability/reliability/etc.

However, Go also has an answer to the situation of "I want dependencies directly from the community without a caching/authorizing intermediary, but don't want to be a victim of someone rewriting their history on GitHub/account-squatting/whatever", and that answer is vendoring plus checking diffs on package upgrade. That's often less convenient than using a trustable caching registry, but it's comparing apples and oranges: a cache like that requires a centralized solution at present, though usable distributed ones might come about at some point; vendoring just requires some of your disk space and a git clone.


> nothing stops people with enough privileges from modifying that cached version

They may be able to modify it but with some kind of trusted hash at least I'll know about it. Making a change to a repo that keeps the same version and also keeps the same hash is a more difficult attack to pull off.


Please see https://research.swtch.com/vgo-repro on reproducible, verifiable builds. The same thing is possible in vgo, with planned support for hashes.


You can commit a go.modverify file in your project, which contains a digest of each module used in the project.
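For illustration only (the module path and hash below are invented), each line of go.modverify pairs a module version with a hash of that module's file tree, so the build can detect if an upstream repository's contents change:

```
example.com/somelib v1.2.3 h1:InventedBase64DigestOfTheModuleTree=
```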


I don't get this. The foundation upon which semantic versioning is built is that the entire semantic version is handled outside the code itself, which allows different packages to communicate how they adhere to a common standard compared to each other. For instance, if I depend on A at 1.0.2, and I depend on B at 2.3.0, even if A and B have nothing to do with one another, as a consumer of A and B, I have a common contract with both of them, which makes it easier to resolve my dependency requirements. In this case, it doesn't matter if B's author never de-facto issues patch releases, because my dependency resolver tool treats them the same.

If I break that foundational assumption by putting the major version directly in the code, i.e. "semantic import versioning", what benefit do I have to track minor and patch separately? Why not just replace them both with a timestamp?

As a library packager, I fit one of the following situations:

* I don't bother with minor or patch versions at all, every release is denoted as possibly breaking. Timestamps mean nothing to me since every major release is effectively a timestamp anyway.

* I don't bother with patch versions, only minor versions. Same as before, each minor version is effectively a timestamp anyway.

* I don't bother with minor versions, only patch versions. Also effectively a timestamp.

* I bother with both minor and patch versions, but after I increase a minor version number, I never issue new patches for old minor versions, e.g. 1.0.0 -> 1.0.1 -> 1.0.2 -> 1.1.0 -> 1.1.1 -> 1.2.0. This is effectively a linear release pattern and therefore can be replaced with timestamps (if your software depends on 1.0.2, and the library packager issues 1.1.0, which is not supposed to break compatibility with 1.0.2, then why wouldn't you upgrade?).

* I bother with both minor and patch versions, and I'm committed to releasing patches for old minor versions for a certain amount of time, so my release history is non-linear and conflicts with timestamps; i.e., if I release in the order 1.0.0 -> 1.0.1 -> 1.1.0 -> 1.0.2, then despite 1.0.2 having a later timestamp than 1.1.0, consumers of 1.1.0 should not "upgrade" to 1.0.2, since it may lack features introduced in 1.1.0. Implicit in this versioning practice is not just the willingness to patch old versions of the software, but the recognition that at least some of your users want to reduce risk as much as possible by avoiding even seemingly non-breaking minor changes while still taking patch changes. Is that even rational for Go? We're talking about statically-linked binaries, not dynamically-linked binaries (where patch updates can be introduced into an environment with a pre-existing binary) and not networked APIs (where rolling back minor versions can impact other systems using that API, not to mention the effects on backing datastores). A statically-linked binary in the end either will or will not compile and pass a test suite, so should dependency management for Go even bother to support this use case?

Why not have "semantic import versioning" with different major versions using different import paths, and then track the timestamp of each dependency as it was used? Some "go dependencies update" command would check for newer timestamped versions (remember, none of which should introduce breaking changes); the timestamped versions in use would be tracked in version control, and builds would be possible without updating the timestamps. This a) gets you reproducible builds, b) makes it easy to check whether updates will (inadvertently, through a mistake of the library packager) break your build, and to roll back if necessary, and c) takes advantage of the fact that Go creates statically-linked binaries to ensure such builds are safe despite the loss of resolution from melding minor and patch versions into one.

Dependency resolution is then a trivial matter of compiling a list of all major versions semantically imported in the codebase, writing them to a file paired with what was at some point the latest timestamp, and optionally checking for and updating the timestamps in that dependency resolution file. If no code references a given major version anymore, remove it and its timestamp from the file.
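A hypothetical dependency resolution file under this scheme (entirely made up here; the proposal in this comment doesn't specify a format) might simply pair each imported major-versioned path with the timestamp last resolved:

```
my/thing/v2    2018-04-30T12:00:00Z
other/lib/v1   2018-03-15T08:30:00Z
```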


Isn't yarn's locking system proof that semantic versioning has failed?


Not at all. Fingerprinted lockfile systems (yarn or otherwise) provide separate benefits from semver:

- The potential for deterministic builds. Saying "my system will work after update because all of the updates are minor versions" is a big benefit provided by semver, but saying "it will work after update because the bits of the built files are the same" is a lot more assurance. "No Trespassing" signs haven't "failed" because door locks and fences exist; it's just different levels of effort/assurance.

- Security/package verification.

- A speed advantage in dependency fetching. Using a lockfile, you no longer have to ask the registry for something that matches your semver pattern, or make multiple round trips. For a single package that advantage is negligible, but across multiple packages with shared dependencies, having a lockfile improves the ability to do parallel package downloads and gives your system knowledge of which dependencies are shared much earlier in the process.

- Auditability/visibility. If you check your lockfile in and someone else is having trouble reproducing a bug, being able to diff lockfiles makes it very easy to see whether your environments differ. Since the full tree is serialized in a predictable place, it's easy to diagnose situations like "you're using a different version of package X, which is causing weird behavior because it's depended on by a package we use". That's not semver's fault; people put '>' without '<' in semver strings all the time, and that's no more a failing of semver than misspellings are a failing of your text editor. Lockfiles also make it easy to identify the opposite scenario: "our installations are identical; it must be something else on your system causing the problem--maybe check your environment/kernel rev?"

Edits: the soul of wit.


Really glad to see the change of course on this issue.


I just finished reading the actual proposal. I am very happy to see this addressed in Go. Every time I've pitched Go the versioning has been a sticking point, despite the many great features of Go.

A potential problem I see with this proposal, though, is that after I verify that everything works, and is secure, with version 2 of a package, I am automatically upgraded to version 2.1, because the import path is 'my/thing/v2/sub/package', which will grab 2.1 when it's available, if I am understanding it right. At that point I am using a dependency I have not run through my additional checks (e.g., static analysis), without knowing it. This is complicated further by the proposal's suggestion that only the major version be allowed in semantic import paths. How then can I pin the version to exactly what I want, with exactly the amount of flexibility I want?

I am not sure, but I think that this issue is not completely addressed by the concept of a 'high-fidelity' build, as discussed in the proposal. If the minimally compatible version based on dependencies is 2.1, but I want to use 2.2 then the high-fidelity build won't do as it would be stuck at 2.1.

It is significantly more complex to handle, but I'd much rather see something along the lines of 'my/thing/$v/2/1#sub.package'. This also lends itself to things like 'my/thing/$v/latest#sub.package', 'my/thing/$v/2/latest#sub.package', and 'my/thing/$v/2/0#sub.package'.

A less complex approach might be to mandate major, minor, and patch version numbers in the semantic import path, but allow 'x' to be used where the author just wants the latest. Examples would then be: 'my/thing/v2.1.1/sub/package'. This also lends itself to things like 'my/thing/v2.1.x/sub/package', 'my/thing/v2.x.x/sub/package', and 'my/thing/v2.0.x/sub/package'

I'll leave you with what I think is a relevant quote from Cool URIs Don't Change, by Tim Berners-Lee.

"It is the the duty of a Webmaster to allocate URIs which you will be able to stand by in 2 years, in 20 years, in 200 years. This needs thought, and organization, and commitment.

URIs change when there is some information in them which changes. It is critical how you design them. (What, design a URI? I have to design URIs? Yes, you have to think about it.). Designing mostly means leaving information out.

The creation date of the document - the date the URI is issued - is one thing which will not change. It is very useful for separating requests which use a new system from those which use an old system. That is one thing with which it is good to start a URI. If a document is in any way dated, even though it will be of interest for generations, then the date is a good starter.

The only exception is a page which is deliberately a "latest" page for, for example, the whole organization or a large part of it.

http://www.pathfinder.com/money/moneydaily/latest/

is the latest "Money daily" column in "Money" magazine. The main reason for not needing the date in this URI is that there is no reason for the persistence of the URI to outlast the magazine. The concept of "today's Money" vanishes if Money goes out of production. If you want to link to the content, you would link to it where it appears separately in the archives as http://www.pathfinder.com/money/moneydaily/1998/981212.money...

-- Cool URIs Don't Change, by Tim Berners-Lee


Not sure you are understanding the proposal completely. You are never automatically upgraded to a version, semantic or otherwise. The only time vgo "chooses" a version is when you first bring a dependency into a project with `vgo get`; it then assumes you want to start with the latest major/minor/patch version.

From then on, you will be "pinned" to that version unless you explicitly choose to upgrade, via an edit to your go.mod or a `vgo get -u`.

As for putting even MORE in the import paths, I think the idea fits with Tim Berners-Lee's advice; non-major versions are like "edits" to a page rather than fundamental changes to its content. I.e., you don't rename the URL when you fix a spelling mistake, just as you don't "rename" your imports when you're only bringing in a patch release.


Ah, ok. Thank you for clarifying. In that case there is nothing not to look forward to, and hopefully the proposal will be adopted in full soon.


The module spec lets you put the minimum version in it; so the scenario you raise in your second paragraph would not happen. The import paths are just there to allow you to have both v1 and v2 in the same project (ideally during a migration period).

You would have to raise the minimum version in your module spec to get a newer version of the dependency.
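For illustration only (the module paths below are placeholders, and the syntax follows the vgo prototype, which quoted module paths), stating a minimum version in the module spec looks roughly like this; the build will use at least v2.2.0, never silently something newer:

```
module "example.com/myapp"

require "example.com/somelib/v2" v2.2.0
```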


How has Go not solved this when it's had npm as an example for 6 years?


It’s explained in the post. To summarize: the Go team felt it would be best if users developed the tooling to solve the problem rather than having an official solution. This resulted in too many solutions, which is itself a non-solution.

Pretty clearly this was a mistake by the Go team, and the blog post says as much.


In short: they follow the Rust's approach. Great!

Update: no, I'm wrong.


I love Rust.. but isn't this approach very different from Rust? Rust doesn't even have the concept of import paths, which is almost entirely how Go is doing versioning here.

Sure, they have verifiable builds, but even JavaScript (Yarn/etc) has had that for years.


The approach is very different from Rust's because of minimal version selection. Instead of getting the latest allowed version of a dependency, it selects the minimum (oldest) version that satisfies all stated requirements, and this applies to transitive dependencies (dependencies of dependencies), where you have less control over asking for updates.


Did you even read the linked article, his seven-part blog post series on vgo [1], and the proposal [2]?

[1] https://research.swtch.com/vgo

[2] https://github.com/golang/proposal/blob/master/design/24301-...


That is inaccurate. In short, Russ Cox has been bashing Cargo's approach: preferring newer versions, lockfiles, and a SAT solver (or really an approximation of one).

Apparently making the dependency manager's code simpler is a goal worth removing developers' ability to be expressive and forgoing automatic security updates.


> Apparently making the dependency-manager's code simpler is a goal

That isn't the goal at all. The goal of Minimal Version Selection is to ensure that the version selected is closest to that which was tested by the app and its dependencies.

> worth removing developer's ability to be expressive and removing security updates.

And that isn't true either. It allows the developers of both the app and its dependencies to express the version they have tested against, and still allows a developer to receive security updates by explicitly increasing the version, testing that the update doesn't create problems while doing so.

It allows for version pinning without the need for a lock file.


I'd be surprised if the Go developers ever care to do this right. Google doesn't use any of this, and all their projects do extensive vendoring with heavy modifications (effectively forking them).

Personally, I despise their approach to developing software, and on principle, I won't really develop software in Go because their attitude permeates through language and software development mechanisms.



