Jay Taylor's notes

back to listing index

Codebase Refactoring with Go | Hacker News

Original source (news.ycombinator.com)
Tags: golang go refactoring news.ycombinator.com
Clipped on: 2017-04-22

I think the proposed alias has true merit in Go, as has been shown by Russ Cox and elsewhere. What I'm a bit confused about is why it's sold as a transitional tool but implemented as a permanent feature.

If it's so easy and clean to use, it may become a misused feature, causing developers to write "temporary" code that just lingers around forever because it's more convenient to let it be than to actually move to the new API. I'd draw parallels with imports in Go versus imports in Python. I would wager that a great majority of Python repos out there have lingering imports in modules that don't need them. But in Go I can guarantee that there are 0% unnecessary imports.

For example, the declaration could be obvious that this is for compatibility as opposed to a feature:

legacy const FUN_THINGS => funner.Things

Also, when compiling a package using the FUN_THINGS constant you'd be warned:

./main.go:10: using legacy alias: "clowns.FUN_THINGS" has moved to "funner.Things"

These small things would help prod developers to actually complete the transition rather than letting it linger.


So far, Go doesn't have warnings, which is great. Either it's an error, or it's nothing.

This is great IMO, because I've seen too many codebases that just end up having infinite amounts of warnings that never get fixed, making them (1) useless and (2) annoying.

(Extraneous imports are a great example of something that in most languages would probably be a warning (or nothing); I remember in the early days people bitched like crazy about it, but on balance it's really, really nice. (goimports also removed a lot of the pain.))

I think it's clear that if this goes in, people will use it beyond just refactoring. Honestly, I'm ok with that, if it's restricted to just types.

Here's something I've done from time to time:

func now() time.Time { return time.Now() }

Just because typing time.Now() everywhere was getting tedious.

Having a type that's an explicit alias doesn't seem too bad to me.

type Time = time.Time

Sure, it's an extra step of indirection, but a very easy one.

(I feel like having variable aliases would be much more confusing, because variables are mutable.)

And it's not like the situation is that different from right now. For example, maybe I have some package with an API like:

    func DateMagic(t time.Time) time.Time
OK, super-obvious what it's taking.

Now maybe I'm using aliases and you see:

    func DateMagic(t Time) Time
Hmm, not quite as obvious; now you have to go look up

    type Time = time.Time
But still today I can do

    func DateMagic(t x.Time) x.Time
Hmm what's x?

Oh

    import x "time"
Of course it could get a little pathological...

    func DateMagic(t DumbName) DumbName

    type DumbName = time.Time
but I feel like you're really trying there.


I don't think re-aliasing exported signatures is a good idea in any language. However, what you do in your own package as a private function is perfectly fine; that's for your own convenience.

Regardless, I don't think the argument that the Go maintainers are making is that this is about number of characters saved. The point of an alias in the first place is to make it more feasible to carry out breaking API changes (which are an important part of any maturing API).

So if aliases are introduced for the purposes of making API transitions possible over multiple commits, then I think they should be clearly designed as such. Which means that it shouldn't tempt people to do what you just described in your comment. Maybe that wouldn't be a bad thing, but if that becomes the ultimate use for the aliases, that use case should play a bigger role in the design of aliases.

Because right now, as far as I can tell, they're being designed mainly for API transitions.


Well, I could easily have made it "func Now" or whatever in my current package.

The point is, adding type alias just gives you the ability to do for types what you can already do for variables and functions.

It's true that the driving case (as presented by rsc) is co-reorganizations, but I think inevitably it'll get used for more than that. (Which is what many people who are opposed are afraid of.)

I would not personally be a fan of making the feature so deliberately painful to "discourage" use. Either add it and make it as nice as you can, or don't. Why add ugly things to your language on purpose? :)


You can use

    var now = time.Now


True, though the slight savings in characters isn't worth the less clear definition, IMO.


On the other hand, you may want to set `now` to return a deterministic time value when testing, which the variable allows.


Is that really less clear?


I think so -- if I see this it's obvious now is a function.

    func now() time.Time { return time.Now() } 
If I see this:

    var now = time.Now
My first instinct is, "hmmm, some kind of variable?" And then I have to remember, "oh yeah, it's just a variable that holds a function."

Obviously not a big deal either way.

(I think this further shows how type aliases would not be adding much possibility for confusion compared to what already exists in the language.)


BTW, if you want, you can explicitly mention the type in the var declaration.


Aliases weren't only proposed as a way to implement gradual repairs (and the proposal, accordingly, is for "a way to implement gradual repair", not so much for "aliases").

There are several non-transitional use cases that were given as justification for aliases when they were initially proposed. Examples are a) implementing the protobuf "public import" feature in generated code, b) providing drop-in extension packages for other people's code (for example, golang.org/x/image/draw is an extension of image/draw) and c) selectively exposing APIs from internal packages. Personally, at least b) is definitely something I would use them for.

It's just that a lot of people didn't clearly understand the refactoring problem that aliases would solve ("just use a tool for rewriting things" was a common response, or "use versions") or how aliases solve it. So, this proposal is taking a step back; instead of necessarily pushing aliases, it now describes the refactoring problem that aliases solve and seeks a) consensus that this is a problem Go should solve systematically and b) agreement on the mechanism it should be solved with. Aliases are one proposed way.

That's why I would be really annoyed if something like compiler warnings were introduced for using them; if Go gets aliases, I'd consider it stupid not to use them for all the other things they're a good solution for. And I would use them as such, and if people complained about compiler warnings, I'd tell them to complain to the Go team for putting in warnings about something that shouldn't be warned about.


These are fair points and I must've missed the elaboration on them when I looked at the proposal for aliases, as I was left with the feeling they were being pushed forward mainly to solve the gradual repair issue.

I guess I have two points to make:

1) try to make a contained solution for a contained problem because the general solution may carry with it a number of unforeseen issues

2) if you do see an opportunity to apply a general solution, exhaust every use case of that solution and rewrite the problem statement to apply to all those use cases

As for being really annoyed if there were warnings, that was exactly my point. :) Label them as something more specific (a solution to the repair problem) and make it difficult to use them for something else. That doesn't exclude a future, more generic solution – it just avoids unaccounted for problems.


> try to make a contained solution for a contained problem because the general solution may carry with it a number of unforeseen issues

Orthogonality is one of the key design goals of go. That means a) keep the intersections of use cases of features low and b) maximize the space you can span with them.

Having "contained solutions" will inevitably lead to a more complicated language (infinitely many contained problems with a contained solution each means diverging number of contained solutions) and won't span a large problem space by definition. You want to solve as many problems as possible with as little features as possible.

> As for being really annoyed if there were warnings, that was exactly my point. :) Label them as something more specific (a solution to the repair problem) and make it difficult to use them for something else.

You misunderstood my point. You are not making it difficult, you are simply making it annoying (to both me and my users). Warnings would have exactly zero influence on how much and for what I would use aliases in practice (thus totally failing your goal), they would just make me hate the person who proposed them (dunno. Maybe that's a plus for you? Doesn't seem like that to me).

Solving many problems with one solution is not a bug, it's a feature. Interfaces do not only solve one problem. Embedding doesn't only solve one problem. A language feature that only has one specific use case is just a wart.

For example: Python's "pass" keyword is an incredible wart. Its only use is to make the language unambiguous to the parser; it doesn't actually serve any other purpose. At the same time, it takes up a keyword; no one will ever be able to use that word as an identifier. The same, actually, goes for a lot of other features of Python. It's pretty much the epitome of insular features:

http://blog.labix.org/2012/06/26/less-is-more-and-is-not-alw...


>I would wager that a great majority of Python repos out there have lingering imports in some module that doesn't need to be there. But in Go I can guarantee that there are 0% unnecessary imports.

With a linter you can do that with python too.


Certainly, and I can also type my JavaScript. But it doesn't happen more than maybe 1% of the time because optional good practices usually fall by the wayside.

This is why Go's simple but strict rules are so key to keeping it clean. Again, I'm not opposed to the alias solution, I just don't think it should be added as a tool with optional good practices accompanying the design spec (e.g., "use it to transition an API and then remove the old aliases").

In his article, Russ argues mainly for the breaking-API case, but ends by saying that general aliases (what was proposed for 1.8) are a promising solution. I think otherwise – a specific use case shouldn't necessarily be seen as an opportunity to apply a generic solution.


>Certainly, and I can also type my JavaScript.

Running a linter that catches just unused imports is a one line command (30 seconds), whereas statically typing your javascript essentially requires a full rewrite (weeks or months of work).

>This is why Go's simple but strict rules are so key to keeping it clean.

There's value in being able to "dial up" the cleanliness as and when it's needed. It's often a waste of time to focus on the cleanliness of code that you're not sure is going to last. I wouldn't want unused imports to cause a compiler failure, even though I run the linter and use that rule regularly.


That's just evading my point. Go doesn't dial up its cleanliness, it's just clean. And it already does cause a compiler failure if you have unused imports or variables.


I think it's fantastic that this problem has people's attention.

I have long wished that code changes to a repository could be accompanied by code refactorings that are intended to be applied to code using that repository. For example, if you rename f() to g(), then you could accompany this by a refactoring that transforms existing callers of f() to use g() as well. I'd envision this as a build step that tells you that automated repairs are available.

The refactoring could be a small but limited program of its own, that is evaluated against the program abstract syntax graph, and that can be as powerful as is warranted or needed to properly transform programs using the code. Moving or renaming code could be a relatively simple type of refactoring. However, if you've renamed f(int) to g(int, int) such that callers of f(N) should call g(N, 0), then a slightly more complex refactoring script could handle that too. I would think of these refactoring scripts as something like how Git treats changes during a rebase: if you are far behind you might need to apply multiple of them to your code base in sequence to bring code up to date.

The article points out an important need for gradual repair to be possible. Along with a way to express that transition backwards-compatibly for a period of time, an automated way to apply the refactoring steps could make adoption even easier. In this fashion, refactoring and improvements could have far lower costs for libraries, APIs, etc. than they do in today's languages.


> The refactoring could be a small but limited program of its own, that is evaluated against the program abstract syntax graph, and that can be as powerful as is warranted or needed to properly transform programs using the code. Moving or renaming code could be a relatively simple type of refactoring. However, if you've renamed f(int) to g(int, int) such that callers of f(N) should call g(N, 0), then a slightly more complex refactoring script could handle that too.

Go has such a tool already in the form of `gofmt -r`. You give it a syntactic rewrite rule like `a[b:len(a)] -> a[b:]` and it runs through the AST and rewrites the code for you. When Go was in development (pre Go 1.0) the developers used this mechanism quite often to make backwards-incompatible changes to the language.


I have long wished that code changes to a repository could be accompanied by code refactorings that are intended to be applied to code using that repository

This was entirely feasible in Smalltalk. Not sure if people ever applied this outside the context of ORMs/database mapping. (Some shops did something like what you are describing in that limited context.) A short script that applied a specific refactoring could be placed in a specially named method on a particular class, for example. (Or some other mechanism could be used to store metadata on what versions the refactoring was applicable to.) It was also possible to quickly hack a GUI tool that you could bring up to selectively apply the refactorings.

Also, after the advent of "The Refactoring Browser," the parser engine was used to build refactorings directly into the "IDE." (It's actually where the parser engine came from.)

(Not only did this rewrite engine have full syntactic power, with all values wild-carded, one could also script against values in the parse tree or provide the values from snippets of code.)


I think they mean the "refactoring" should be shipped as part of the release, to be applied (possibly automatically or semi-automatically) by codebases depending on the library. It's not about a formal refactoring of the "current" codebase, which most modern IDEs can do and which is pretty easy in statically typed languages.

Though what they're talking about would generally be considered an API change not a refactoring.


I think they mean the "refactoring" should be shipped as part of the release to be applied by codebases depending on the library, possibly automatically or semi-automatically

That's exactly what I'm talking about! For example, in the StORE version control that comes with VisualWorks Smalltalk, it would be fairly easy to detect the presence of a particular class and method, whose name contains metadata about version, then pop up a window showing the potential refactorings.

The go equivalent would be to have a dist-refactorings subdirectory, and a go tool. Maybe dist-refactor?

   go dist-refactor ...
The refactoring scripts would have names that contain version information. Invocation of dist-refactor would cause a list of available refactoring scripts to be applied to the codebase from all of the dependencies that are at a version later than the one compiled. Alternatively, one could invoke

   go dist-refactor [library]


I think you're looking for the gofmt [0] rewrite rules. E.g. [1].

[0]: https://golang.org/cmd/gofmt/

[1]: https://fallthrough.io/2015/08/gofmt-rewrite-rules/


As others pointed out, Go actually has some tooling for that, in the form of "go fmt" and "go fix".

Still, it only works for refactorings simple enough to be applied automatically, which on the other hand are the simplest to maintain compatibility with (writing a stub with the old name/arguments that calls the new function).


There is one such tool for C: coccinelle http://coccinelle.lip6.fr/


This should extend beyond code and cover data, too. If you replace a bool field in a class with an enum, for instance, and then read in a file that serialized the bool values, they should automatically be upgraded to use the enum. (The same goes for paging in, on systems with a REPL; though if your system has paged-out data, paging in every page on every structural change to any class, to check it for to-be-migrated objects, would make changes too slow.)

There was/is tons of work on that, often using Common Lisp and the meta-object protocol because of its excellent reflection capabilities and change-class.

I can't find a good reference, but http://plob.sourceforge.net, https://common-lisp.net/project/elephant/ and http://www.adoc-metis.com/wp/wp-content/uploads/2013/04/elw0... may be starting points for learning more about this.


> if you rename f() to g(), then you could accompany this by a refactoring that transforms existing callers of f() to use g()

Decent IDEs can do that, and clang's interested in that space as well: http://eli.thegreenplace.net/2014/07/29/ast-matchers-and-cla...


> an automated way to apply the refactoring steps could make refactoring a breeze

It's called an alias, a feature that should have been there a long time ago but that, "thanks" to a minority of gophers, still isn't. Go's fundamental design issue is that it conflates namespaces with URLs. It's not something that can be fixed by a third-party tool.


> Go fundamental design issue is that it conflates namespaces with urls.

No, there is nothing in the design of the Go language that conflates namespaces with URLs. Import paths are just strings. They are interpreted as directories under $GOPATH or ./vendor on the file system by the current Go compiler from golang.org.

One tool called 'go get', which helps to put Go source code files into these directories, does interpret them as URLs. But other implementations of the Go specification like gccgo don't even have the 'go' tool (and thus no 'go get').

Import paths don't even have to be interpreted as directory paths. They could be interpreted e.g. as database queries by other Go spec implementations.


Probably more importantly, for all the criticism this receives, the very same people usually reinvent the same system in other languages.


I'm not a Go user myself, and my comment is about programming in general rather than Go specifically (though Go seems to be leading the way in this area).

Does alias help Go programs automatically transform themselves? Alias seems to provide direct compatibility between types, but doesn't seem to provide any way to automatically refactor code based on the alias, does it? https://github.com/golang/go/issues/16339

Basically what I'm thinking is something like an alias feature where when A has been renamed to B, such that the name A has been left for backwards compatibility as an alias to B, then an automation layer could also help by automatically offering to rename A to B in existing code.


I think the reason it's not normally done is that for small open source projects, there are only a handful of usages and each author can easily do it manually. For a large monorepo, we do have (or are building) specialized search-and-replace tools to replace deprecated API usages with usages of the new API.

To scale this in the open source world, perhaps someone could search Github for Go projects and send out pull requests automatically. But people writing non-opensource code would still be pretty much on their own.


> To scale this in the open source world, perhaps someone could search Github for Go projects and send out pull requests automatically.

This would be amazing for the open source community and people depending on it.


Thinking really blue-sky:

I'm imagining the refactoring rules as something that could perhaps ship alongside the library as part of its development release. Like, let's say we've released the latest version of our library which renames A to B. The release could include a refactoring rule that describes how to update from previous versions of the library to the latest version: a rule that identifies all references to A and replaces them with B. References that tooling cannot identify and fix can also still work for a time via the alias feature.

To make these rules easy to author, perhaps we could provide some form of assistance at the source control layer -- perhaps we could infer and propose refactorings based on the diffs that we see. If we see a diff renaming the method A to B, then we could see that and offer to construct a refactoring rule for all consuming code. Or with full IDE support, the IDE refactoring tool could call into the language service refactoring function to construct the rule.

To refactor in this language and platform, you'd rename your method, stage the commit, and then ask the language to analyze the change. It would detect the rename, offer to create a refactoring rule, and then you could accept that rule and apply it to your own code base (internal usage). You could have the option to save that refactoring rule as part of the package release. It could say: when consuming code upgrades from version 7 of this library to version 8 (or commit hash xxx to yyy), apply these refactorings. Indeed, even if the library does not ship refactoring rules -- or as an alternative to that model -- users could run the inference tool on the diff of changes to the library source. Shipping refactoring rules in the release would allow this scheme to work even if users don't have access to library source, though.

Imagine our library has hundreds of consumers on GitHub. When they next pick up a version of our library, and try to build the source, our build system could inform the user that refactorings are available and offer to apply them. The user clicks "accept", reviews the changes, and hopefully the package builds and its tests pass after that.

If we make this system really reliable, then we could apply these changes as the default course of action. Ideally the user would apply the refactorings and commit the changes to their source code, but even if the user doesn't, we could apply the changes to a temporary copy every time they build. Or the compiler could do it semantically for them, depending on how the rules work. This way even users who aren't willing to actively participate can still benefit from the capability. If your consuming package falls far behind the latest library, then the set of refactorings you need to apply to your code in order to build it could grow quite large and brittle, but it may still work (A changes to B today, and is moved into another package tomorrow). Like a rebase of many commits, you'd apply the rules one by one.

The goal would be to make it really easy to ship these refactoring rules as part of releases as a library vendor, and really easy to apply the rules as a consumer. The typical change would be small quality of life changes like renames and moves.

I wonder how a capability like this could transform the way we release software. Today, releasing a breaking change is anathema. It's anathema because we know it causes massive pain for users. If we had the ability to ease that pain, and make the upgrade process really easy or even automatic, then it could perhaps significantly change the way in which we think about interface contracts. Everyone who builds libraries is familiar with a time where you got the interface wrong, and really wish you could change it, but it's too late now because the library is too entrenched. This kind of system would make it possible for us to dig ourselves out of those problems and continuously improve even codebases that are widely used. Of course I have my head in the stars at this point, but I believe that all of this is plausible.

[I'm not a Go user, but these concepts have been something I've wanted to explore as features of a pet programming language I've been designing off-and-on.]


Of course I have my head in the stars at this point, but I believe that all of this is plausible.

It's not only plausible. I know of environments where it would be easy to code-up.


And yet interface compatibility/continuity would still be important unless you can automatically change all of the deployed instances without inducing any race conditions or transition issues. This can only be done through a backwards-compatible layer or a bifurcation of the install base. So really, nothing would be fundamentally different.


And yet interface compatibility/continuity would still be important unless you can automatically change all of the deployed instances without inducing any race conditions or transition issues.

The VisualWorks Package system could theoretically be applied at runtime. It would create a shadow of the meta-level objects, then the new meta-level could be applied atomically with "shape changes" of instantiated objects happening all at once.

This could 1) still incur a pause in the program and 2) in practice, bugs surfaced preventing the widespread use of the facility in running servers.


> but "thanks" to a minority of gophers it's still not there

I don't know where you're getting this from; in almost every discussion I've seen, opponents of the proposal were the overwhelming majority.


I agree with this idea, and I'm glad that the Go team has come around to this. :)

Type aliases end up being very important when you have things like complicated closure types and generics. They are also useful for migration, as the article notes. It's also nice for packages to be able to reexport names from other packages: perhaps your library depends on some internal packages that you only want to expose a few types from.

One word of advice for Go developers if this is implemented: Pay attention to getting error messages in the compiler right. Ideally you want to refer to types by the name the programmer was using at the place the error occurred, not the underlying internal name. This requires hanging on to typedef information through more internal stages of the compiler than just name resolution. Clang does a great job of this with its "a.k.a." functionality.


One issue with gradual refactorings is that such projects can be abandoned, resulting in harder-to-read code. Such things could accumulate, like unused imports would if Go didn't have a mechanism to prevent that.


As someone who drank the Golang koolaid this proposal kind of infuriates me.

Time after time, I've had to proselytize that Go is a good language precisely because of its simplicity and restrictions. Simple, explicit, easy-to-write, easy-to-understand, and most importantly easy-to-maintain code is the payoff for not having generics, not having syntax niceties, limited meta-programming, having to type out boilerplate, etc.

We've been continually told that our use cases for missing features are just opportunities to explore the existing tooling to solve our problems, and to be honest it really hasn't been terrible advice. In fact, I've used the existing Go tooling myself at work to do semi-large-scale refactoring of the kind that aliases are supposed to solve. It wasn't terrible, and the solution ended up being rather clear and straightforward once I got over my aversion to code generation.

We've also been told that we need to program better to get the most out of Go... use interfaces, version APIs and include that versioning in import paths, separate concerns religiously, almost fanatically. If we just follow these rules then we'll experience Go nirvana and all will be well.

And yet when it comes time to practice what they preach, instead of having to take the same path as everyone else, Google just throws its weight around as de facto owner of the language and adds a feature to make their lives easier. A feature that doesn't even apply to most users of the language (the original argument for type aliases was refactoring of extremely large codebases). Type aliasing seems like a huge outlier in the KISS philosophy of Go, and regardless of syntax or technical concerns, the fact that Google can, on a whim, break all of their previous Ivory Tower positions that we plebes have had to deal with doesn't sit well with me.

Oh, and let's not forget that there was some sketchiness about the technical reasons for wanting type aliases at the beginning (we were told it was absolutely necessary for either something in the context package or in k8s, I don't remember precisely), which magically resolved itself without aliases, and the very first PR (for some experimental GUI package) using aliases was an example of exactly the kind of poor practices that aliases allow, which we were assured would never happen because Google would only use aliases in the std lib if absolutely necessary.

I guess I shouldn't be surprised at this, but I'm pretty disappointed that it's already happening when we're still in 1.0. If the Go maintainers truly want Go to continue its current trajectory and not turn into the next overwrought Big Co language, they'll have to be as resistant to change from the inside as they are to change from the outside.

Edit: reworded first pp


> limited meta-programming,

Go reflection is one of the most complicated reflection systems I've ever seen in any language. Go's C interop is one of the most convoluted implementations in existence. To pretend that Go is simple in the first place is a lie. Go is not simple; it has a limited syntax, which is not quite the same thing. Go will evolve, whether you like it or not, even if it takes a whole new generation of Go developers and maintainers to get there.


This is what I keep saying: history has proven that languages that eschew complexity and happen to enjoy mainstream adoption end up turning into the thing they were fighting against.


I left C++ because of complexity; I don't want it in Go.


> We've been continually told that our use cases for missing features are just opportunities to explore the existing tooling to solve our problems

The proposal pretty clearly explains why tooling isn't a solution.

> And yet when it comes time to practice what they preach, instead of having to take the same path as everyone else, Google just throws its weight around as de facto owners of the language and adds a feature to make their lives easier.

I tried debunking this before. Google really doesn't need this. It makes their lives easier, yes, but they can do without it. Most people cannot do without it, because they don't develop in a monorepo, which is a strict requirement for being able to solve these problems at all.

I'm not saying that this is purely altruistic, it definitely makes things easier for people at Google. But the meme that this is bad for everyone and good only for Google is just plain wrong; it makes things easier for Google and possible for everyone else.

> A feature that doesn't even apply to most users of the language (the original argument for type aliases was for refactoring of extremely large codebases)

The repository of all open source code is an extremely large codebase (it's larger than even Google's internal codebase) that will profit immensely from aliases, and it probably contains a majority of Go users.

> Oh, and lets not forget that there was some sketchiness about the technical reasons for wanting type aliases at the beginning (we were told it was absolutely necessary for either something in the Context package or in k8s, i don't remember precisely) which magically resolved itself without aliases

I don't know where you are getting your information, but nothing has resolved itself so far, much less magically. Most central users of context are still (after almost four months) not using the stdlib context package, because this situation isn't resolved yet, see e.g. https://github.com/grpc/grpc-go/issues/711 and the issues linked from there.

> and the very first PR (for some experimental GUI package) using aliases was an example of exactly the kind of poor practices that aliases allow, which we were assured was never going to happen because Google would only use aliases in the std lib if absolutely necessary.

It wasn't an experimental GUI package, it was golang.org/x/image/draw (specifically https://go-review.googlesource.com/#/c/32145/), which serves as a drop-in replacement for image/draw. And I don't believe it's at all clear-cut that this is a "poor practice"; it is an unforeseen use case, yes, but it's also a very practical one. Being able to augment other people's packages with a drop-in wrapper like that will encourage code reuse, as it gets easier to re-package other people's code behind a better API. Crappy APIs sure as hell are the main reason I don't use much third-party code, and I am very excited if it becomes possible to provide these kinds of wrappers. The technical merits of this, btw, were also clearly documented. (Also, nit: golang.org/x/image/draw isn't in the stdlib.)

> I guess I shouldn't be surprised at this, but I'm pretty disappointed that it's already happening when we're still in 1.0.

nit: We haven't been in 1.0 for 3½ years.

There is another way to frame this (and other issues) that I consider closer to the truth (though it's of course less anti-corporation): Google's size (or rather its holistic view of a codebase of that size) allows it to better predict the needs of the industry and community as a whole.

This is easily illustrated with Go itself. You can see it as "Google needed their own language, so they developed Go, ignoring what everyone else wanted". But that view just doesn't hold up against reality. If that were the case, they wouldn't have open sourced it (and internal adoption would be much higher than it is). It makes more sense when viewed from a different angle: they saw where computing and software development were headed, that more work would run in the cloud and be scaled with networked services instead of beefier machines. So they developed a language to better support those use cases. Yes, they also profited personally, because those use cases are theirs. But they predicted that other companies and the industry as a whole would move in that direction too, so they open sourced it. And so far, Go's more than decent adoption in the cloud space does seem to indicate that their prediction was right. Remember how Go was perceived in the beginning (and in part still is) by the PL community: as a dinosaur language from the 80s that isn't to be taken seriously. No one, at the time, would have expected there to be a market for it.

It's similar with aliases. Yes, they make the lives of Googlers simpler. But, in the end, Google needs a good language-agnostic tool internally anyway, as there are a lot of production languages there. They noticed that this is a problem they are having, so they want to build a solution, because they anticipate that other people will have similar problems in the future, even if those people don't see it yet.


I really like Go. It's a good language.


You can learn more about Golang here https://www.livecoding.tv/learn/golang/


