Jay Taylor's notes

Toward Go 2 | Hacker News

Original source (news.ycombinator.com)
Tags: generics golang go news.ycombinator.com
Clipped on: 2017-07-14


The paragraph I was looking for is this:

> For example, I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve. As a result, I can't answer a design question like whether to support generic methods, which is to say methods that are parameterized separately from the receiver. If we had a large set of real-world use cases, we could begin to answer a question like this by examining the significant ones.

This is a much more nuanced position than the Go team has expressed in the past, which amounted to "fuck generics," but it puts the onus on the community to come up with a set of scenarios where generics could solve significant issues. I wonder if Go's historical antipathy towards this feature has driven away most of the people who would want it, or if there is still enough latent desire for generics that serious Go users will be able to produce the necessary mountain of real-world use cases to get something going here.



I'm not even a Go developer, I just played with it a bit a couple of years ago and used it for a small one-off internal API thing, and I can think of a dozen real-world use cases for generics off the top of my head.

  * type-safe containers (linked lists, trees, etc.)
  * higher order functions (map, reduce, filter, etc.)
  * database adapters (i.e. something like `sql.NullColumn<T>` instead of half a dozen variations on `sql.NullWhatever`)
  * 99% of functions that accept or return `interface{}`
I appreciate that they seem open to the idea for v2, but "the use case isn't clear" is absurdly out of touch with the community. Using `interface{}` to temporarily escape the type system and copy/pasting functions just to adjust the signature a bit were among the first go idioms I learned, and they both exist solely to work around the lack of generics.
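For the copy/paste point, a rough Go sketch (hypothetical `SumInts`/`SumFloat64s` names) of the kind of duplication that exists only because the signature can't be parameterized over the element type:

    // One body, copied per element type, because only the signature differs.
    func SumInts(xs []int) int {
        var total int
        for _, x := range xs {
            total += x
        }
        return total
    }

    func SumFloat64s(xs []float64) float64 {
        var total float64
        for _, x := range xs {
            total += x
        }
        return total
    }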

It's generally the opinion of the Go community that map, reduce and filter are bad ideas due to how easily they are abused. A for loop gets the job done easily enough. If you've ever worked with data scientists working with Python, you'll quite often see them all chained together, probably with some other list comprehensions thrown in until it becomes one incomprehensible line.

If that's the case, then the Go community is wrong.

For loops do not get the job done easily enough. I've lost count of the number of times I've had to do contortions in order to count backwards inclusive down to zero with an unsigned int. With a proper iterator API, it's trivial.
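A rough sketch of that contortion in Go (hypothetical function and slice names; the shape of the problem is the same anywhere C-style loops meet unsigned indices):

    // Broken: i is unsigned, so i >= 0 is always true; when i reaches 0 the
    // decrement wraps around to the largest uint and the index goes out of range.
    func backwardsBroken(xs []int) {
        for i := uint(len(xs)) - 1; i >= 0; i-- {
            _ = xs[i]
        }
    }

    // The usual contortion: loop one past the target and subtract inside.
    func backwardsWorkaround(xs []int) {
        for i := uint(len(xs)); i > 0; i-- {
            _ = xs[i-1]
        }
    }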

Furthermore, for loops are a pain to optimize. They encourage use of indices everywhere, which results in heroic efforts needed to eliminate bounds checks, effort that is largely unnecessary with higher level iterators. Detecting the loop trip count is a pain, because the loop test is reevaluated over and over, and the syntax encourages complicated loop tests (for example, fetching the length of a vector over and over instead of caching it). For loop trip count detection is one of the major reasons why signed overflow is undefined in C, and it's a completely self-inflicted wound.
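As a small, hedged illustration of the trip-count point (hypothetical `item`, `expensiveLen` and `process` names), nothing in the C-style form tells the compiler, or the reader, how many iterations there will be:

    // The loop test is re-evaluated on every iteration; if expensiveLen isn't
    // trivially analyzable, neither the compiler nor the reader gets a trip count.
    func walk(v []item) {
        for i := 0; i < expensiveLen(v); i++ {
            process(v[i])
        }
    }

    // Hoisting by hand: the trip count is now a single variable.
    func walkHoisted(v []item) {
        n := expensiveLen(v)
        for i := 0; i < n; i++ {
            process(v[i])
        }
    }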

I'm generally of the opinion nowadays that adding a C-style for loop to a language is a design mistake.


It's also strange that a language that wants to encourage parallelism requires for loops, and specifies that they always run sequentially. Java 8 can put map/reduce with a thread pool in the standard library precisely because it doesn't use a for loop; imagine how tedious and repetitive the Go version would be: https://docs.oracle.com/javase/tutorial/collections/streams/...

To be fair, I don't think Go prioritizes parallelism as much as it does concurrency.

> For loops do not get the job done easily enough.

There was a nice paper at this year's POPL which (in my opinion) allows you to substantiate this claim.

The paper is "Stream Fusion to Completeness", by Oleg Kiselyov, Aggelos Biboudis, Nick Palladinos, and Yannis Smaragdakis. The actual question in the paper is how to compile away stream operations to imperative programs using for/while loops and temporary variables. On the other hand, if we look at the output of the compiler we can see how complex it is to express natural stream programs using only for/while loops.

For instance, even very simple combinations of map/fold/filter create while loops and make it difficult to detect the loop trip count afterwards (in fact, you need an analysis just to detect that the loop terminates). If you combine several such streams with zip, the resulting code makes a nice entry for obfuscated code contests. Finally, if you use flatMap the resulting control flow doesn't fit into a for/while loop at all...

So for/while loops are not simpler than stream processing primitives and unless you explicitly introduce temporaries (i.e., inline the definition of map/fold/etc.) you quickly end up with very complicated control flow.
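To make that concrete, here is a hedged Go sketch (made-up `xs`/`ys` data) of what a pipeline that would read as filter, then map, then zip in stream form looks like once it's inlined by hand: two cursors, manual advancing, and no obvious trip count left in the loop condition:

    // zip(map(double, filter(isEven, xs)), filter(isPositive, ys)), inlined by hand.
    func zipPipeline(xs, ys []int) [][2]int {
        var out [][2]int
        i, j := 0, 0
        for i < len(xs) && j < len(ys) {
            for i < len(xs) && xs[i]%2 != 0 { // skip to the next even element
                i++
            }
            for j < len(ys) && ys[j] <= 0 { // skip to the next positive element
                j++
            }
            if i >= len(xs) || j >= len(ys) {
                break
            }
            out = append(out, [2]int{2 * xs[i], ys[j]})
            i++
            j++
        }
        return out
    }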


This all depends on what "the job" being done well enough is. You can't use a research paper to refute the experience of the many programmers who successfully use for loops to get their work done. That's a statement about usability, not expressiveness.

If you want to show something else is easier to use, you'd have to do a user study, and even that's not going to be universally applicable since it depends on the previous experiences of the user population being tested. It's why these things tend to be debated as a matter of taste.


Interestingly, there are user studies, mainly on kids.

They all show that the natural mode of expression is declarative programming and that for loops are never what comes naturally.


Interesting. Links?

Yes, people can get crazy with inline anonymous function chaining/composition, and that can quickly get out of hand in terms of maintainability and readability, but deeply nested imperative loops is often much, much worse to debug and understand, because the intermediate steps are not nearly as explicit as in a functional chain/composition that simply takes data and returns data at every step.

Regardless, these are simply cases of people writing bad code, and nobody is claiming map/reduce/filter is a panacea for bad code.

Functional composition/chaining works best with small, well-named single purpose functions that compose/chain together into more complex functionality (with appropriate names at every non-trivial level of chaining/composition). You can't easily compose/chain imperative loops this way (at least not without wrapping them in functions that take data, and returns transformed data, by which point you might as well use map/reduce/filter to transform the data to begin with to get rid of the impedance mismatch).


> It's generally the opinion of the Go community that map, reduce and filter are bad ideas due to how easily they are abused.

There is a germ of truth here - sometimes list comprehensions make code harder to understand than if you just wrote an if statement or a loop - but you've overgeneralized. I don't think anyone on the core Go team seriously thinks that map and reduce are bad ideas when Google relies so heavily on MapReduce (and its successors).


> If you've ever worked with data scientists working with Python, you'll quite often see them all chained together, probably with some other list comprehensions thrown in until it becomes one incomprehensible line.

It's not incomprehensible, it's just phrased a different way from what you're used to.

A lot of programming boils down to mutating local or global state by executing lines of code -- each of which mutates the contents of some variable somewhere -- in order, one at a time. But this is just one style of programming, and it is not the only one. When using map, filter, reduce etc. we instead construct a path through which data flows, where each line modifies some aspect of the data as it flows through the path. This is essentially what Functional Reactive Programming is, although you can get the same effect using a pure language and function composition.


> It's not incomprehensible, it's just phrased a different way from what you're used to.

What would be incomprehensible?


Code that's badly written.

As opposed to code that just uses chained map, reduce, etc. instead of nested or sequential for loops.

One is a difference in skill, clarity, etc.; the other is a difference in style (imperative vs. FP).


"Ability to abuse" isn't a good criteria for language design. I've seen plenty of abuse of Go channels, but I'm not going to make the argument you should remove them.

Python's comprehensions are among the most powerful, easy to use, concise capabilities of any language I've used. You can address the abuse issue with coding guidelines: no more than 2 deep. Done.


The same argument applies, mutatis mutandis, to imperative iteration constructs such as range.

>It's generally the opinion of the Go community that map, reduce and filter are bad ideas due to how easily they are abused.

How easily are they abused?

Because it's the opinion of the programming community, including the brightest programmers out there, that FP, and map, reduce and filter are totally fine and dandy.


I would also love Result<Err, MyThing> as a return type in place of (err, myThing).

Yeah, using a tuple for what clearly should be a disjoint union is infuriating. I really don't buy the gopher argument that 'it is hard to understand'.

> I'm not even a Go developer

> Using `interface{}` to temporarily escape the type system

tbh if you program Go correctly, you use interface{} pretty rarely.


> tbh if you program Go correctly, you use interface{} pretty rarely.

It's a bit like saying "if you program C correctly, you use (void *) pretty rarely." That's not the case: the Go maintainers themselves keep on adding APIs with interface{} everywhere. Are you saying they are not programming Go correctly?


The request for use cases in Go seems a bit like begging the question to me. Since Go doesn't have generics, anything designed in Go will necessarily take this into account and design around this lack. So it's relatively easy to show that Go doesn't have a compelling use case for generics, since the designs implemented in Go wouldn't (usually) benefit from generics!

Rust has generics and traits/typeclasses, and the result is my Rust code uses those features extensively and the presence of those features greatly influences the designs. Similarly, when I write Java, I design with inheritance in mind. I would have trouble showing real world use cases for inheritance in my Rust code, because Rust doesn't have inheritance and so the designs don't use inheritance-based patterns.

Essentially, how does one provide real world evidence for the utility of something that's only hypothetical? You can't write working programs in hypothetical Go 2-with-generics, so examples are always going to be hypothetical or drawn from other languages where those tools do exist.


It reminds me of the old urban planning line "You can't decide on the necessity of a bridge based on the number of people who swim across the river".

Users who have a heavy need for generics have already moved away from Go.


This suggests that both generics and inheritance are unnecessary.

By the same logic, decades of assembly show that loops, functions[0] and named variables are unnecessary too; you can do everything using addresses, offsets and jumps.

They sure are useful to readability and maintainability.

[0] which can obviously be used to replace looping constructs anyway


Thanks to Turing equivalence, all programming languages are unnecessary. We should all go back to writing machine code.

It only suggests you can't easily give an example because the language is forcing a design where such things aren't needed. Sort of like linguistic relativity.

Which still proves the parent's point: it's not necessary. The question is what absolutely can't be done without them (probably nothing) so a better question is how much design/engineering/test time could be saved with them?

On the latter part I'm fairly cynical these days, since I'm presently on a team where the lead embraced a Scala-DSL heavy, functional design and the net result has been a total loss of project velocity because when we need to prototype a new requirement, the push back has been paraphrasing "oh, could you not do that in <core product> and handle it somewhere else?" - so we end up with a bunch of shell scripts cobbled together to do so.


> what absolutely can't be done without them

Of course nothing can't be done without them. With or without generics or inheritance the language is still Turing complete.


And if you look at C you'll see that interfaces and struct methods are also unnecessary, GC is also unnecessary, bounds-checked arrays are also unnecessary.

The question is, do you want to write type-safe code? Which is memory-safe? Which has bounds-checked arrays? Or not? Assembly makes all that stuff unnecessary as well. This is not a good argument, especially when the Go std lib keeps gaining type-unsafe APIs using interface{} everywhere. That is precisely what generics are for: to allow writing parametric functions (sort) or type-safe containers, instead of sync.Map in the std lib.

If you care about type safety and code re-use then generic programming is a necessity. What do you think append, copy or delete are? These are generic functions. All people are asking for is the ability to define their own in a type-safe fashion.

Are these use cases that Russ Cox doesn't know exist?


If I were in the "go community" I would be pretty annoyed by this quote. I would find it dismissive of the literal years of people pointing at areas of their code that are bloated and less safe due to not having generics.

It doesn't seem to me that there's a shortage of people pointing out real-world use cases at all, and I'm looking from the outside in.


the use of the monotonic clock example is also interesting because of the similar tone of responses to that issue. I think it's just the rsc way.

basically it goes like:

> If google isn't screaming about it, it's not important.
> If google has a workaround, then it's fixed.


As a maintainer, you're bombarded with reasonable requests (I think around 10 a day on the Go project). Part of your job is to turn down hundreds of these and pick the few that benefit the most people from a diverse set of users, and also don't break anything or extend the api surface too much. Then whatever you choose people complain vociferously. Sometimes good requests get ignored in that noise.

Choosing is hard, and while they could improve, I think the Go maintainers have done a pretty good job, and are willing to admit mistakes.


I haven't programmed in Go, but from what I understand, Go's explicit error handling isn't enforced by the type system: you can leave off an `if err != nil { ... }` check and everything still compiles. I think adding a generic result/error type (like the Result type in Rust or the Either type in Haskell) would be a pretty useful feature, as currently the error handling sits in kind of a weird place between forced handling and traditional exceptions.
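For instance, something like the following compiles without complaint even though the error from os.Open is never checked (a minimal sketch with a made-up file name):

    package main

    import (
        "fmt"
        "io/ioutil"
        "os"
    )

    func main() {
        f, err := os.Open("config.json") // err is assigned here...
        b, err := ioutil.ReadAll(f)      // ...and silently overwritten here, so a
        fmt.Println(len(b), err)         // failed Open goes unnoticed; this compiles.
    }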

I have no idea why you are being downvoted.

In Rust, this was handled the right way. The Result type forces you to at least actively dispose of the error.

In Go, it's just way too easy to leave an error unhandled.


> The Result type forces you to at least actively dispose of the error.

Sort of: by default, an unused value of the `Result` type produces a compiler warning. Like all warnings in Rust, this category of warnings can be promoted to a hard error for all users with a single line of code. And unlike other languages where compiler warnings are uniformly ignored, there is a strong culture in Rust of making sure that code compiles warning-free (largely due to the fact that the compiler is rather selective about what to issue warnings about, to help avoid useless false positives and warning spew (there is a separate project, called "clippy", for issuing more extensive warnings)).


I think you're thinking of Result<(), E>, whereas they're thinking of Result<T, E>, that is, you only get that warning if you don't care about the Ok value of the Result. If you need the value, you also need to deal with the error in some fashion.

This seems like a no-op to me. My stack of code tools always warns me of unhandled errors (insert argument about maybe the compiler should be doing it I guess?) but I've never understood how a Result type provides any real benefit over the usual error-tuple Go uses.

In both cases I have to either write a handler immediately after the call, or I'm going to fail and toss it back up to whoever called me.

Errors-should-be-values has always seemed like a bizarre statement to me. Like, fine, but they're still errors - their core feature will be that I stop what I want to do and handle them. And in that regard I much prefer exceptions (at least Python style exceptions) because most of the time I just want to throw the relevant context information up to some level of code which knows enough to make a decision on handling them. The thing I really want is an easy way to know all the ways something can fail, and what information I'm likely to get from that, so I can figure out how I want to handle them.


> In both cases I have to either write a handler immediately after the call, or I'm going to fail and toss it back up to whoever called me.

The main difference as I see it is that in Rust, you can't get the success value if there's an error, so you're forced by the compiler to handle any error if there is one. With tuples, you're free to just look at one element regardless of the value of the other.


In order to maintain compatibility with other interfaces, they have to return error in several places. Take bytes.Buffer, which returns errors even when they are impossible. Same with http.Cookie(): it promises the error will only ever be http.ErrNoCookie, but that isn't going to make static analysis tools happy.

Go's errors are really nice (if a bit repetitive) in my opinion. Type signatures enforce that you accept and acknowledge an error returned from an invocation, but leave it up to you to handle, pass along, or silently swallow.

So... Just like checked exceptions in Java?

Yes, it's somewhat similar in that the compiler checks if you have handled or chosen not to handle an error. You're not allowed to ignore it (through omission).

yeah but better because you don't break the world if you add a new exception to a public method.

Declaring that every method throws Exception doesn't break the world any more than every Go function returning a value of type error. You're intentionally saying pretty much anything could go wrong and nobody can plan ways to recover.

Does that mean it now throws an error if you use the common punt like `foo, _ := …` and never check `_`?

No, because using `foo, _` you're explicitly saying that you don't care about the error.

No, but that's what I mean as intentionally ignoring.

If they want to "learn" about generics perhaps they can read the literature of the past 30yrs and look at how other languages have adopted those learnings: Java, C#, Haskell, OCaml and Coq.

Look, even allowing type aliases to become part of an interface declaration would be a HUGE win. You can't currently write portable arbitrary data structures without reimplementing them with a code generator. Ugh!


> If they want to "learn" about generics perhaps they can read the literature of the past 30yrs and look at how other languages have adopted those learnings: Java, C#, Haskell, OCaml and Coq.

Yeah, I find it strange how languages are trending at a glacially slow pace to having the same features that strongly typed functional programming languages have had for literally decades. It's like we're going to be using actual functional programming languages eventually but the only way it'll happen is to very slowly change what everyone is currently using in small steps.

Static types, immutability, type inference and non-null variables are popular design choices right now but they've been in functional programming languages for nearly 50 years. I'm still waiting for inductive types and pattern matching to turn up in a mainstream language and seeing them talked about like they're new concepts.


C# is adding pattern matching in the upcoming release, and to your point, people are acting like it's the new hotness.

That's not exactly surprising, the C# community has been doing that since the beginning of the language, anything not in the language is pointless academic wankery, and as soon as Microsoft announces it it's the best innovation in computing history since Microsoft was created.

Source: got to interact with the community between the C# 1.0 and 4.0 releases (2.0 added generics, 3.0 added lambdas, neither feature was considered of any use to a Productive Developer up to the day when Microsoft officially announced them).


> That's not exactly surprising, the C# community has been doing that since the beginning of the language, anything not in the language is pointless academic wankery, and as soon as Microsoft announces it it's the best innovation in computing history since Microsoft was created.

That isn't true inside Microsoft. Many of the people who work on C# are the same academic wanks that work on Scala or F#. C# has a different user base from those languages though, so they still have to be careful what they add to the language, and many language features are planned 3 or 4 versions in advance.


> That isn't true inside Microsoft.

No, that was not intended as included in "the C# community". Hell, SPJ used to work at Microsoft (he may still do, but he used to).

> the people who work on C# are the same academic wanks that work on Scala or F#

I'm sure you mean wonks, but I liked the typo.

> C# has a different user base from those languages though, so they still have to be careful what they add to the language, and many language features are planned 3 or 4 versions in advance.

I have no issue with the evolution of C# rate or otherwise, only with a number of its users.


Ugh, that was a bad, embarrassing typo.

C# has a specific audience, and the language designers cater to them pretty well. I really really like C# as a language, and I don't mind delayed access to certain features that I already like from other languages.

You probably have a beef with some C# users not because of their choice of language, but because the field they work in (primarily enterprise) tends to breed a certain kind of attitude that other techies don't like very much.


Clearly they finally learned the lessons of Apple. Old is busted, new is perfect.

(says a man planning to purchase his 5th Macbook Pro later this year)


It is new to C#, which is slowly catching up to Scala and F# in that regard. Mads Torgersen is good friends with Martin Odersky; in fact, when I first met Mads back in 2006 or so, they were talking about adding pattern matching to C#. C# is a much more conservative language, and it makes sense it would take a while to add.

There are good reasons to use C#, so when it gets a new feature that other languages have had for years, well, it is newsworthy.

Now when will javascript get pattern matching?


There is a paper, Pizza into Java: Translating theory into practice, from Odersky and Wadler at POPL 97 about how to mix generics, lambda and pattern matching in a Java like language.

C# and Java are slowly catching up :)


It's just become a Stage 0 proposal: https://github.com/tc39/proposal-pattern-matching

Woo hoo, can't wait. Especially with what TypeScript can do with this.

There's a proposal for it! Stage 0? Stage 1? Don't remember what stage it's in off the top of my head, but it looks promising.

I think the reason for that enthusiasm is not so much that it's the new hotness (although it is some people's first encounter with the idea), but that it's now available in a mainstream language that their employer will actually let them use (in about five years when they finally bother upgrading Visual Studio).

Surely if Go is considered mainstream so is Swift?

> strange how languages are trending at a glacially slow pace

Human behaviour is strange when you expect rationality. Sour grapes is such a pervasive cognitive bias that one has to wonder why it exists since it's obviously irrational. I think it's likely that it presents a major advantage in the psychology of group cohesion.


> perhaps they can read the literature of the past 30yrs

40, nearing on 45:

> Milner, R., Morris, L., Newey, M. "A Logic for Computable Functions with reflexive and polymorphic types", Proc. Conference on Proving and Improving Programs, Arc-et-Senans (1975)


Guess what: they did.

https://news.ycombinator.com/item?id=8756683

They just weren't ready to make any of the significant tradeoffs other languages have to do in order to support them.

Whether that's a good outcome, I'm not fully sure - but there's definitely a lot of research in that area.


The basic tradeoff is monomorphization (code duplication) vs. boxing (in Go speak, interface{}). The problem with saying "well, this is a tradeoff and both sides have downsides, so we won't do anything" is that the tradeoff still exists—it's just something that manually has to be done by the programmer instead of something that the compiler can do. In Go, the monomorphization approach is done with code duplication (either manually or with go generate), while the boxing approach is done with interface{}. Adding generics to the language just automates one or both of the approaches.

The idea that the monomorphization/boxing tradeoff can be dodged by not having generics in the language is a fallacy. In Go, today, you as a programmer already have to decide which side of the tradeoff you want every time you write something that could be generic. It's just that the compiler doesn't help you at all.
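A hedged sketch of what those two manual choices look like in today's Go (hypothetical `Max` helpers): the duplicated functions are monomorphization by hand, and the interface{} version is boxing by hand, complete with the type assertions the compiler would otherwise manage for you:

    // Hand monomorphization: one copy of the code per type.
    func MaxInt(a, b int) int {
        if a > b {
            return a
        }
        return b
    }

    func MaxFloat64(a, b float64) float64 {
        if a > b {
            return a
        }
        return b
    }

    // Hand boxing: one copy of the code, but callers lose static types
    // and pay for the interface{} conversions and the comparator call.
    func Max(a, b interface{}, less func(x, y interface{}) bool) interface{} {
        if less(a, b) {
            return b
        }
        return a
    }

    // n := Max(3, 5, func(x, y interface{}) bool { return x.(int) < y.(int) }).(int)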


> The basic tradeoff is monomorphization (code duplication) vs. boxing (in Go speak, interface{}).

And even that is not the whole story, just because you reify generics does not mean you have to monomorphise everything, Microsoft's C# compiler only monomorphises value types.

Also Go may box in other contexts than interfaces depending on escape analysis (according to https://golang.org/doc/faq#stack_or_heap).


The FAQ entry does not state that the Go compiler boxes escaping variables (it doesn't, AFAICT). How did you arrive at that conclusion?

>monomorphization

Does that mean writing a separate version of an algorithm for each type or types that you want it to handle? Like a sort for ints, a sort for floats, etc., even if the logic is otherwise the same (or almost)?

Not a PL design or theory expert, just interested.


Yes, it means specializing the code and generating a separate version of the function for each type it is instantiated with.

Got it, thanks.

In Haskell (and I believe Scala), one can use pragma hints to specify to the compiler when to specialise, if performance becomes a problem. So I don't see this as an argument against parametric polymorphism.

> The basic tradeoff is monomorphization (code duplication) vs. boxing (in Go speak, interface{}).

There is also the approach taken by Swift, where values of generic type are unboxed in memory, and reified type metadata is passed out of band. There's no mandatory monomorphization.


Really by "boxing" I mean "dictionary passing" (I've also seen the term "intensional type analysis" for it). That encompasses approaches like Java as well as those of Swift.

Hybrid approaches in which the compiler (or JIT!) decides whether to monomorphize or use vtables are also possible.


"Dictionary passing" is a good term for this implementation strategy, I haven't heard it named before. Do you know of any other languages that take a similar approach?


I thought OCaml still boxed values, so there's no equivalent of value witness tables?

The Swift approach shares many of the same characteristics and trade-offs as "true" (aka Java-style) boxing. Of course, it does avoid some downsides of that, but also brings along some additional ones.

I think the main difference is that the Swift approach ends up being more memory-efficient; e.g. Swift's Array<Int> is stored as an array of integer values, compared to Java's ArrayList<Integer>, where each element is boxed.

Also the Swift optimizer can clone and specialize generic functions within a module, eliminating the overhead of indirection through reified metadata.


Yes, there are some pitfalls of Java that Swift avoids, like compulsory allocation, but it fundamentally has the same style of indirection (often pointers to stack memory rather than heap, but still pointers), type erasure and pervasive virtual calls, and brings with it even more virtual calls due to needing to go through the value witness table to touch any value (which might be optimised out... or might not, especially with widespread resilience).

The compiler specializing/monomorphizing as an optimisation shows that Swift has a hybrid approach that tries to balance the trade-offs of both (like many other languages, like Haskell, and, with explicit programmer direction, Rust), but the two schemes still fall into the broad categories of monomorphizing vs dictionary passing, not something fundamentally different.


Oh, hi Huon.

> Guess what: they did.

Yeah as pointed out in that very thread by pjmlp:

    As usual, it lacks discussion about generics in:
    Eiffel
    Ada
    Modula-3
    D
    MLton
    .NET
    Rust
    Haskell
    OCaml
    Sather
    BETA
    CLU
Oh wait, they're pointing out that none of these is even remotely discussed in the article, hinting that they did not, in fact, and much prefer taking down strawmen built from some sort of daguerreotypes of Java and C++.

> They just weren't ready to make any of the significant tradeoffs other languages have to do in order to support them.

Ah yes, the ever so convenient but never actually clarified "tradeoffs", which only ever happen for generics but oddly enough apparently don't happen for any other feature.

> there's definitely a lot of research in that area.

That there is, would be nice if Go's designers ever bothered actually using it.


Nitpick, but the idea is even older, going back at least to Girard (1971) and Reynolds (1974). :)

I don't know exactly what the problem is for Go. There are tradeoffs, e.g., just with the type system: impredicative, stratified or predicative quantification, implicit subtyping or explicit type instantiation, value restrictions, variance handling for mutable inductive types, Hindley-Milner or bidirectional typechecking, etc. There are more tradeoffs with the implementation. Fortunately, these are all well understood by now.

However, it is also true that many mainstream languages famously got generics wrong. What's most infuriating about this situation is that a lot of research just gets ignored. If the question is really "should Go have generics whose design is based on C++ templates and/or Java generics" then the only sane answer is a resounding no.


You forgot CLU in 1975. :)

The classic Wizards lecture on generic '+ is a good enough argument.

The numeric tower of lisps is the canonical use case of what generic functions are good for.

Smalltalk, the second greatest language after Lisp, is also good example.


> perhaps they can read the literature of the past 30yrs

You realize this post is talking about Go right? /s



That's an awesome trick, using angle brackets that aren't angle brackets!

It really reminds me of early C macros where people use the ## operator to synthesize monomorphized versions.

The ## operator is an ANSI C feature, so not really "early". In pre-ANSI preprocessors you had to abuse the lexer to paste tokens, e.g. by relying on /**/ comments being deleted outright - nowadays comments are replaced by a space.


They should look up the usage of empty interfaces [in the wild or in-house].

This is a great point. Other analysis one could do:

1. Look at uses of cast operators in a codebase. How many of those would be eliminated by making the surrounding operation generic?

2. Go through the language spec and see how many features could be moved out of the language and into the core library if generics were supported.


>This is a much more nuanced position than the Go team has expressed in the past, which amounted to "fuck generics," but it puts the onus on the community to come up with a set of scenarios where generics could solve significant issues.

It seems to me to be the same "Generics is some foreign concept we just heard of, and we're not sure how to implement them in Go yet, but we'll consider it if we chance upon some magical optimal way" that has been the official party line of Go since forever...

Even the "we are asking you for proposals" part has been there since a decade or so...


It strikes me as a pretty odd statement, though. I'll admit that I'm not very familiar with Go, but surely you can look back at the benefits of generics in other languages? Go is hardly so specialised that you can't learn a single lesson from elsewhere.

I agree, I didn't find the statement refreshing at all, I found it insulting. You damn well know the use-cases for generics. If you still don't like it; fine, say that explicitly. Don't pussyfoot around it and play the "we just don't know that much about it yet" lie.

Not having generics makes it hard to write proper type-safe functions and libraries.

The moment I realized it was an actual problem for me was when I tried to connect to a SQL database and work with a proper ORM.


Why does an ORM need generics? I ask because I've built something very like an ORM in Go and I didn't have any problem without generics. ADTs on the other hand...

In my specific case it was the fact that I could not have one insert function or one update function.

I would need one for each and every struct (table).

These days there is a tool that can generate all those struct methods: https://github.com/vattle/sqlboiler

So from the ORM perspective we (as the community) have worked around it.


This is incidentally where .NET was around the 1.1 release (2003-ish). We had code generation frameworks kicking out swathes of boilerplate.

Now we have Repository<T>, and for us around 200,000 lines of code are gone, with only one narrow set of tests to execute.


I guess I don't see how generics would help you reduce the number of insert/update functions. The basic problem of an ORM is to map struct fields to columns; I don't see how generics would help you here. Can you write the generic pseudocode you want to write?

I would guess something like:

    class Collection<T implements Entity> {
        void insert(T entity) {
            // column names, and one "?" placeholder per column
            String cols = entity.props.map(p => p.name).join(",");
            String qs = entity.props.map(p => "?").join(",");
            PreparedStatement stmt = db.prepare("insert into %s (%s) values (%s);", this.tableName, cols, qs);
            stmt.bind(entity.props.map(p => p.value));
            db.submit(stmt);
        }
    }

Go supports that today. Slice is a generic collection. Map will need to become for loops, but that's minor.

The idea that we don't need generics because we can generate code is kind of ridiculous, and certainly doesn't pass the simplicity smell test.

He wasn't making that argument...

Neither did I say he was. He was linking to a library that works around the lack of generics by generating code.

The strongest typed ORM I've ever used is http://diesel.rs/

This code (okay I made the use line up because it's not on the website and I'm lazy, you do need one though):

  use some::stuff::for::the::dsl;
  
  let versions = Version::belonging_to(krate)
      .select(id)
      .order(num.desc())
      .limit(5);
  let downloads = version_downloads
      .filter(date.gt(now - 90.days()))
      .filter(version_id.eq(any(versions)))
      .order(date)
      .load::<Download>(&conn)?;
is completely, statically typed, with zero runtime overhead. Generics + type inference makes sure that everything is valid, and if you screw it up, you get a compiler error (which honestly are not very nice at the moment).

Thanks to generics, all of this checking is computed at compile time. The end resulting code is the equivalent of

  let results = handle.query("SELECT version_downloads.*
    FROM version_downloads
    WHERE date > (NOW() - '90 days')
      AND version_id = ANY(
        SELECT id FROM versions
          WHERE crate_id = 1
          ORDER BY num DESC
          LIMIT 5
      )
    ORDER BY date");
but you get a ton of checking at compile time. It can even, on the far end of things, connect to your database and ensure things like "the version_downloads table exists, the versions table exists, it has a crate_id column, it has a num column", etc.

You can absolutely create an ORM _without_ generics, but it cannot give you the same level of guarantees at compile time, with no overhead.


Ah, I think the bit where generics is useful is in making sure the value types are consistent (i.e., that you're not building a query that subtracts an int from a string or compares a date to a bool). This still doesn't guarantee that the types will match with the database columns, but it is a step up from the non-generic alternative.

I keep meaning to pick up Rust, still looking for some entry point that's interesting enough to get me off my butt.

How (well) does diesel deal with migrations? I've seen other ORMs fall down the stairs trying to deal with schema modification over time.


I think you meant to respond to Steve. I'm not a Rust guru (I keep trying it, but I don't have any applications for which it's reasonable to trade development time for extreme performance). I definitely don't know anything about diesel.

You have a migrations dir that contains the changes to schemas over time, so you can always get your table in to the state it needs to be in.

It can check that they match with the columns, yeah.

It also heavily relies on generic types to ensure that everything is well-formed, and to provide zero overhead.


How does the compiler know the types of the database columns? That information has to come from somewhere. Also, type checking is negligible from a performance perspective, so to me the "zero overhead" is of minimal interest.

There is a macro (http://docs.diesel.rs/diesel/macro.infer_schema.html) that connects to the database and builds up structs for the table in the database it's connecting to.

Very cool, but that's not "because of generics" then. Not to denigrate the feat.

> How does the compiler know the types of the database columns?

There's two variants on a single way: basically, a "schema.rs" file contains your schema. You can write it out, and (with the help of migrations) it will update it when you create a new migration, or, you can have it literally connect to the DB at compile time and introspect the schema.

> Also, type checking is negligible from a performance perspective, so I the "zero overhead" is of minimal interest.

Ah, but it's not "the type checking is fast", it's the "diesel can generate hyper-optimized outputs that don't do runtime checks". Like, as an interesting example, Diesel can (well, will be, this has been designed but not implemented yet) actually be _faster_ than a literal SQL statement. Why? The SQL literal is in a string. That means, at runtime, your DB library has to parse the string, check that it's correct, convert it to the databases' wire format, and then ship it off. Thanks to strong generics, since the statements are checked at compile time, none of those steps need to be done at runtime; the compiler can pre-compile that DSL into postgres' wire format directly.

The reason this hasn't actually been implemented yet is that it's admittedly a tiny part of the overall time, and the Diesel maintainers have more pressing work to do. I use it as an example because it's an easier one to understand than some of the other shenanigans that Diesel does internally. It couldn't remove this kind of run-time overhead without generics.


This is all very neat, but I think Go could do all of this too, except that "query compile time" would happen on application init or on the first query or some such. Still, very cool and good work to the diesel team!

The whole point of an ideal type system is that if code compiles, it is provably correct and won’t have bugs.

Few type systems come close to that (Coq is one of the few languages where the system might be advanced enough), but generally you want to completely remove entire classes of errors.


Sure, and you get runtime errors instead of compile time errors. That's the point.

I keep meaning to pick up Rust, still looking for some entry point that's interesting enough to get me off my butt.

How (well) does diesel deal with migrations? I've seen other ORMs fall down the stairs trying to deal with schema modification over time.


You write raw SQL files that do exactly what you want them to do: https://github.com/rust-lang-nursery/thanks/tree/master/migr... each of these has an up.sql and a down.sql

"diesel migraiton run" will run all pending migrations, there's revert/redo as well. "diesel migration generate" can generate these files and directories for you; it doesn't yet (that I know of) have helpers to write the SQL though (like Rails does for its migrations).

On deploy, it runs the migrations first https://github.com/rust-lang-nursery/thanks/blob/master/buil...

I believe there are some interesting improvements coming down the line in the future, but for now, it works pretty well.


> This is a much more nuanced position than the Go team has expressed in the past, which amounted to "fuck generics," but it puts the onus on the community to come up with a set of scenarios where generics could solve significant issues.

Nah, this is pretty much the same answer they've been giving all along, lots of handwaving about how generics are so cutting edge that no one can figure out how to use them effectively yet ("we just can't wrap our heads around this crazy idea!")

In other words, don't count on them in Go 2.


I would start using Go in my projects if it introduces generics. It's a show stopper for me.

The question is _why_ do you need generics, for what use case(s), etc? Saying “I need generics otherwise I won’t use Go” is exactly the type of feedback they don’t want.

It seems like you’d have valuable feedback given that it’s a “showstopper” for you.


Not the GP, but you can think about generics as a better kind of interfaces:

- performance-wise: no dynamic dispatch means no virtual function calls, which means faster code.

- type safety: let's say I want a data structure that can store anything, but everything in it must be the same type. You just can't do that with Go's current type system without introspection (which is cumbersome and really slow). (See the sketch at the end of this comment.)

Generics already exist in Go: maps, slices and channels are generic, and that was mandatory for performance reasons. The problem with not having generics is that the community can't build its own implementations of useful data structures: for instance, until the last release Go lacked a thread-safe map. That's fine, let's use a library for that… Nope, can't implement that, because “no generics”.

Generics are rarely useful in your own code, but they allow people to create good abstractions in their libraries. Without generics, the ecosystem growth is limited.
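To illustrate the type-safety point above: a minimal, hypothetical interface{}-based Stack accepts mixed element types without complaint, and the mistake only surfaces as a runtime panic at the type assertion:

    package main

    import "fmt"

    // A minimal interface{}-based stack; nothing constrains the element type.
    type Stack struct{ items []interface{} }

    func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

    func (s *Stack) Pop() interface{} {
        v := s.items[len(s.items)-1]
        s.items = s.items[:len(s.items)-1]
        return v
    }

    func main() {
        var s Stack
        s.Push(1)
        s.Push("two")      // compiles fine: the type system can't object
        n := s.Pop().(int) // panics at runtime: "two" is not an int
        fmt.Println(n)
    }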


> you can think about generics as a better kind of interfaces

No, you can't, because generics don't provide the key feature of interfaces: run-time dynamism.

You could make a case that static polymorphism and dynamic polymorphism could be accomplished by a single mechanism, as traits do in Rust, but you can't say generics are "better" than interfaces, since they solve a different set of problems, with different implementation strategies, despite related theoretical foundations.

> The problem of not having generics is that the community can't build its own implementation of useful data structures

This is not true either, although it is true that it is harder to do this sort of thing in Go: the result may in fact not be as fast as what you can achieve with monomorphisation, and you may have to write more code by hand.

The "trick" is the Go model is to provide an interface (which yes, potentially unnecessarily uses a dynamic mechanism) to implement indirect operations. For example the Swap function of https://golang.org/pkg/sort/#Interface enables https://golang.org/pkg/container/heap/#Interface - which is a reusable data structure. Compiler-level static-generics for slices and arrays, combined with indirecting through array indexes, enables you to avoid boxing when instantiating the heap interface. The data structure can be instantiated with an interface once, so there is no dynamic lookup for heap operations, but there is still a double indirect call.

Yes, this is less convenient than proper generics, but this doesn't mean you can't do these things.

Furthermore, I've never actually needed to do this outside of sorting (which, absolutely, is annoying) in large projects. Almost every time I need a specialized data structure, I usually need it for performance reasons, and so hand-specializing for my needs tends to be worthwhile anyway.

> Generics are rarely useful in your own code, but they allow people to create good abstractions in their libraries.

I'd like to see something like templates or generics for Go, but I definitely don't want Java-style "fake" generics without code specialization. Furthermore, I found that the vast majority of generics-heavy libraries back in my C# days were more trouble than they are worth, despite meaningful opportunities for better performance with value types. Copy/paste almost always resulted in a better outcome.


> Yes, this is less convenient than proper generics, but this doesn't mean you can't do these things.

Yep, assembly is less convenient than Go, but it doesn't mean you can't write programs using assembly.


> for what use case(s)

I almost feel this is like asking Bjarne what use cases there are for 'classes' in C-with-classes. It's a structural language change that (depending on implementation details) can have sweeping ramifications on how one writes code.

Type parameterization allows for the expression of algorithms, idioms and patterns wholly or partially separate from concrete type details. I'm sure others can better sell the topic though.


Do you not see how this is an insulting question? You know very well the use cases for generics. You do not need users to present new ones. You can literally Google "generics use cases" and get hundreds of thousands of results that directly answer that question.

Did you even read the blog post? Such feedback was explicitly asked for.

I think the problem everyone has swallowing that ask is that the value of generics is that it's so widely taught, with so many use cases, and so much literature (academic and otherwise) that the suggestion that they need use cases is laughable.

What use cases do they need other than the extremely large body of public knowledge on the matter, and why would one more example change anyone's mind?

To me this represents the epitome of the insular and outwardly hostile attitude that the Go team has toward good ideas elsewhere in the CS world. Would it really make a difference to the team if I or anyone else were to hunt down specific examples and present them with problems solved with generics and stronger type systems?

I doubt it.


Off topic, but any chance UNI is your alma mater? I think we may have been there at the same time, judging by your name and some of your comments.

Aye.

Asking for use cases when the use cases and benefits are already well-known seems pretty disingenuous to me. (Of both you and the blog post.)

... but, hey, just to indulge you, a few off the top of my head:

- Generic containers in libraries (aka not built-in)

- Parametricity to restrict user-supplied implementations of interfaces; not quite as valuable in a language with casts, etc., but still reasonably valuable.

- To give a sound account for the currently-magic built-in types.

That should be enough, frankly, but I'm sure I could come up with more if I were to spend 5 more minutes on it...

Can we stop pretending that they don't know of any compelling use cases now?


There are many different ways to provide generics (templates, typeclasses, etc.), each with their own pros and cons. It's not simply a matter of "add generics". And whatever solution they do come up with is going to impose its cons on those who would have been better served by a different generics solution.

The Go team have been long criticized for choosing the option that fits Google, but not the rest of the world. This seems like their attempt to think about what others are doing with the language, beyond their insular experience, so they don't end up with something that fits Google perfectly but falls apart everywhere else.

If they don't take the time to learn how people intend to use generics in Go, the best solution for Google, and Google alone, is what we will get.


Then they should ask for that instead.

It's not unreasonable to ask "how do we implement this in the best way?", but I'll note a) that's not what was asked, and b) I have a hard time believing that code at Google is so incredibly "special" that they need a special kind of generics. Also, if Google is such a unique snowflake why not ask the Google people directly rather than the "Go community"?


> "how do we implement this in the best way?"

They asked for use cases from a wide range of people to ensure they implement it in the best way. Subtly different, but essentially the same request.

> I have a hard time believing that code at Google is so incredibly "special" that they need a special kind of generics.

I didn't say they need special generics. I said the approach that works best at Google may not be the best approach for the population at large. If Google is best served by C++-style generics, while the community as a whole would be better served by Haskell-style generics, why just jump in and do it the C++ way before seeing how others might want to use it?

> Also, if Google is such a unique snowflake why not ask the Google people directly rather than the "Go community"?

Because they are trying to avoid the mistake of using one datapoint like Go has struggled with in the past? They know what works in Google, but that doesn't necessarily work for everyone else. See: Package dependencies, among many.


> They asked for use cases from a wide range of people to ensure they implement it in the best way. Subtly different, but essentially the same request.

Phrasing is important, and obviously (from the reactions of me and others in the thread) the phrasing was way off and perceived as condescending and lazy.

> I didn't say they need special generics. I said the approach that works best at Google may not be the best approach for the population at large. If Google is best served by C++-style generics, while the community as a whole would be better served by Haskell-style generics, why just jump in and do it the C++ way before seeing how others might want to use it?

OK, so you said they don't need a special generics, but then say that they do. I must not be understanding what you're trying to say. (If this is about choosing trade-offs, then see the other poster who talked about trade-offs. Executive summary: Not doing generics is also a trade-off.)

Also: ASK GOOGLE.

> Because they are trying to avoid the mistake of using one datapoint like Go has struggled with in the past? They know what works in Google, but that doesn't necessarily work for everyone else. See: Package dependencies, among many.

You can't have it both ways. Either Google is important enough to do it in a way that works for them, or the community is more important and they get to choose.

Anyway, I'm done with this conversation. I think we may be seeing this from viewpoints that are so different that it's pointless to continue.

I'm not sure how to argue constructively with someone who says "I'm not saying X, but..." and then immediately states a rephrasing of "X". I'm sure that's not what you think you are doing, but that's the way I'm seeing it, FWIW.


> Phrasing is important, and obviously (from the reactions of me and others in the thread) the phrasing was way off and perceived as condescending and lazy.

Maybe, but I don't know that we should be attacking someone's poor communication ability. I'm sure I've misunderstood at least one of your points too. Let's just focus on what was actually asked for: Use-case examples.

> OK, so you said they don't need a special generics, but then say that they do.

There isn't an all encompassing 'generics'. Generics is a broad category of different ways to achieve reusable statements across varying types, in a type-safe manner. To try and draw an analogy, it is kind of like functional and imperative programming. Both achieve the function of providing a way to write programs, but the paradigms differ greatly. Each with their own pros and cons.

If imperative programming is the best choice for Google, that doesn't mean the community wouldn't be better served by functional programming, so to speak. And when it comes to generics, there are quite a few different paradigms that can be used, and not easily mixed-and-matched. They are asking for use-cases to determine which generics paradigm fits not only the needs at Google, but the needs everywhere.

> Also: ASK GOOGLE.

The Go team is Google. They have asked Google. Now they are asking everyone else. I'm not sure how to make this any more clear.

> Either Google is important enough to it in a way that works for them, or the community is more important and they get to choose.

In the past Google was seen as the most important, and they have been widely criticized for it. This is them moving in a direction that favours the community. And they are still being criticized for it... Funny how that works.


> There isn't an all encompassing 'generics'. Generics is a broad category of different ways to achieve reusable statements across varying types, in a type-safe manner. To try and draw an analogy, it is kind of like functional and imperative programming. Both achieve the function of providing a way to write programs, but the paradigms differ greatly. Each with their own pros and cons.

Yes, thank you. Everybody in this thread already knows that. PICK ONE.

(EDIT: I should also add: Since Go is structurally typed and has mutable variables, that should be a good indication of what to do and what not to do. See e.g. Java arrays, variance and covariance.)

> If imperative programming is the best choice for Google, that doesn't mean the community wouldn't be better served by functional programming, so to speak. And when it comes to generics, there are quite a few different paradigms that can be used, and not easily mixed-and-matched. They are asking for use-cases to determine which generics paradigm fits not only the needs at Google, but the needs everywhere.

And now you're trying to bring imperative vs. functional into this?

I think IHBT... and this really is my last comment in this thread. Have a good $WHATEVER.


> Yes, thank you. Everybody in this thread already knows that. PICK ONE.

They are picking one, based on the use-cases that will be given to them in the near future. I don't understand what your point is here.

> Since Go is structually typed and has mutable variables, that should be a good indication of what to do and what not to do.

That may be true, but the worst case scenario is that they learn nothing from the examples they get. You don't have to provide any if you feel it is a useless endeavour. If it gives them comfort, so be it.

> And now you're trying to bring imperative vs. functional into this?

It wasn't clear if you understood what I meant by there being different ways to provide generics. I struggled to write it as eloquently as I had hoped and the analogy was meant to bridge what gaps may have existed.

It's almost like communication can be difficult. Where have I seen that before? Hmm...


(Just to end it. I was intrigued.)

> It's almost like communication can be difficult.

We can agree about that! :)

> Where have I seen that before? Hmm...

Not sure what you mean (ironically?).

Have a good night :).


I was merely suggesting to indulge the author, especially now that generics might actually happen.

The problem is that the community did indulge the author and the other Go maintainers 5 years ago, and there was an apparent refusal to see the lack of generics as an issue, almost as though Turing-completeness was sufficient justification not to include them.

"Find us examples and show us so we can ponder this further" is incredibly condescending after we did that 5 years ago and they decided to stall (or to use the author's term, "wait") on the issue. Honestly I think it might be too late for generics to be added, because the large body of existing interfaces aren't generic and likely would have a very high transition cost.


Because I like both reusable implementations of algorithms across compatible types AND type safety.

It's not rocket science.

And because it's 2017.


Create an ordered set or bloom filter type for Go and let us know how you make out.

> The question is _why_ do you need generics

Why do we need anything beyond assembler?


Try implementing LINQ in Go, while using no runtime type assertions at all and maintaining compile-time type safety (ie: no interface{}).

Then you'll wish you had generics. :)


The benefits of generics are well understood.

So why is he requesting feedback on the matter?

Bad faith.

That's what we're all trying to figure out.

Collections, for example. Functional reactive programming. Things like what I write in Java now: https://github.com/JetBrains/jetpad-mapper

I remember when Java started to have generics in 1.5. It was very hard to go back to 1.4 after having used that. I feel that I just can't program without them any longer.


You might be interested in my side project: https://github.com/lukechampine/ply

It's a small superset of Go that allows you to call map/filter/reduce methods on any slice. It's not true generics (you can't define your own generic functions or types) but it covers the most common use case for me: declarative data processing.


What do you use?

Conversely, I would stop using Go in my projects if it introduces generics.

If Go wants to be more sophisticated, there are other sophisticated languages I can use out there like Haskell, Rust, F#, Scala, etc. The problem I have learned in my career is "sophistication" usually gets in the way of getting things done because now you have all these clever developers full of hubris creating these novel implementations that nobody else in the world understands, and your project fails but hey -- at least you have a dope, terse, generic implementation -- totally rad.


So you'd rather take a half-dozen, half-assed implementations of templates, macros, flaky cast-to-interface things and a bunch of incompatible weirdness rather than a singular, reasonably effective pattern that's been proven successful over literally hundreds of programming languages?

> Conversely, I would stop using Go in my projects if it introduces generics

Depending how they're implemented, you might not have to use them (even via libraries). Not using the language altogether is a bit dramatic.


That's never really true, because using a language means using other people's libraries, and using libraries means debugging and extending them, so you eventually end up needing to work with anything the language can serve up.

This is my beef with JavaScript these days... They introduced a slew of new, in my opinion insane, control structures, and while I can avoid them in my code, they are still slowly polluting my dependencies, and adding obscure runtime requirements for my projects.


Parsimony is king.


The idea of needing to come up with use cases for generics is baffling. The existence of generics in numerous other languages already supports a plethora of use cases. I really don't get that statement at all.

Then why don't you come up with a legitimate use case?

Because generics have been around for decades, and anyone who can't be bothered to look into the use cases for a feature that spans numerous languages over that time period doesn't deserve the time it takes someone else to spoon feed this information to them. This goes for the author of the article as well.

I think this is a fair question. We have a lot of pseudo-intellectuals here that think 90% of their job isn't writing business functions. Not having generics in Go has not hindered me at all in performing the objectives of my business. Everyone wants generics, no one knows why. When I had generics in my previous two roles where I used Java and C# respectively, I can count on two fingers the number of times I needed them, and once was because a library forced me to.

> Everyone wants generics, no one knows why

This is such a troll'ish statement, but I'll respond anyways because maybe you actually are just that uninformed. Without generics any number of libraries that utilize generic function blocks - Func [1] in C#, Callable [2] in Java, etc., and do things like return the function's type parameter as a method result, would not be possible. This is exceedingly common, at least in libraries. If you want to know how common, let me refer you to my friend http://google.com.

Just because something isn't valuable to you, personally, doesn't mean it's not valuable. As in all aspects of life, not everyone is you.

1. https://msdn.microsoft.com/en-us/library/bb549151(v=vs.110)....

2. https://docs.oracle.com/javase/7/docs/api/java/util/concurre...


I am another developer who has never really suffered as a result of a lack of generics. Personally I really like how dead easy golang code is to read and understand. God forbid golang become a language where untold amounts of logic can be packed into a single line of code, resulting in near mathematical levels of complexity which always require more time to understand. Like most complex concepts in life, I suspect the number of developers who can effectively use these tools is rather small compared to the number of people who think they "need" them. But hey, I'm a vim programmer who doesn't like to have to rely on a bloated IDE to be able to deal with the code that I am interacting with so I might be a minority.

I think this is too little, too late. Anyone who saw an advantage to using generic programming is already using something else. They've already invested resources in something besides Go; why spend the effort to switch back for what will, at best, probably be very mediocre support for parametricity compared to whatever they're already using?

This presumes that:

a) generic programming is a new thing

b) any of those people you mention aren't using go for some things and other things where this feature is desired

c) that the feature will 'at best' be at level X

d) that level X won't be enough to satisfy some use cases

and a whole host of other things, not least of which is the fact that go itself was invented to suit some purpose after other languages existed, and has been successful for many users


Most of the projects which will use Go have not begun yet. Adding support means people may use Go for those future projects.

The opposite side of that coin is that other programming languages do already exist.

Any new language is already playing catch-up to reach the state of the art. Go is 10 years old and in those 10 years has made astonishingly little progress at implementing basic features such as error handling, because the designers cannot escape their bubble.

The most sensible thing to do is to kill Go, not to improve it. The root problem is not the lack of this or that feature, but the bizarre beliefs of the people running the project.


> The most sensible thing to do is to kill Go, not to improve it.

Care to expand this? It's pretty clear that you don't find Go suitable for your own use cases, but I've found it to be an extremely productive language, despite the fact that it may have a couple of warts.


The fact that go1.9 will include a "sync.Map" type which should be generic, but can't be due to the language's limitations should already answer that.

See also all of the "container" portion of the stdlib: https://golang.org/pkg/container/

All of those should be generic, but aren't.

Clearly there's already sufficient demand for generic collections just based on the stdlib and the fact sync.Map was seen as needed.
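To make that concrete, here is roughly what using sync.Map looks like today (a minimal sketch): every value goes in and comes out as interface{}, so each Load needs a type assertion, and nothing stops you from storing the wrong type under a key.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var m sync.Map // effectively a map[interface{}]interface{}

        m.Store("answer", 42)
        m.Store("oops", "not an int") // compiles fine; the mistake only surfaces at runtime

        if v, ok := m.Load("answer"); ok {
            n := v.(int) // every read needs an assertion back to the concrete type
            fmt.Println(n + 1)
        }
    }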


"All of those should be generic, but aren't."

I think even more damning than "these should be generic" is that there should be more of them. The real problem is the lack of generics inhibits the introduction of more data structures because they are hard to use.

It is true that arrays and maps/hashes take you a long way, but there's a lot of other useful data structures in the world. And despite the fact they may look easy at first, data structure code tends to be notoriously difficult to write both correctly and with high performance.


> All of those should be generic, but aren't.

They aren't because the prevailing attitude is "what do you need generics for? Just do a cast!"

The issue is the Go developers are unwilling to consider generics until someone can come up with a problem that absolutely, positively cannot be solved without generics. And of course there isn't one.


I mean, having to cast after every single call to heap is a problem.
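Concretely, with container/heap the assertion shows up on every Pop (a minimal sketch, using a hypothetical IntHeap type):

    package main

    import (
        "container/heap"
        "fmt"
    )

    // IntHeap is a trivial heap.Interface implementation over []int.
    type IntHeap []int

    func (h IntHeap) Len() int            { return len(h) }
    func (h IntHeap) Less(i, j int) bool  { return h[i] < h[j] }
    func (h IntHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
    func (h *IntHeap) Push(x interface{}) { *h = append(*h, x.(int)) }
    func (h *IntHeap) Pop() interface{} {
        old := *h
        n := len(old)
        x := old[n-1]
        *h = old[:n-1]
        return x
    }

    func main() {
        h := &IntHeap{5, 1, 3}
        heap.Init(h)
        heap.Push(h, 2)
        smallest := heap.Pop(h).(int) // the cast in question: Pop only ever returns interface{}
        fmt.Println(smallest)         // 1
    }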

Go is turing complete already, so all problems are already able to be solved. :)


Agree - having map, filter, reduce with strong type guarantees will be a huge boon. Imagine being able to map over a collection and the code hints will understand the new output types in the chain. This is something you cannot have when your higher order ops are all typed to interface{}.
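For contrast, this is roughly the best a reusable map function can do today (a minimal sketch): the element type is erased to interface{}, so the compiler can't tell you anything about what comes out the other end.

    package main

    import "fmt"

    // Map applies f to every element of in. Without generics the element
    // types are erased, so callers lose all static checking of the results.
    func Map(in []interface{}, f func(interface{}) interface{}) []interface{} {
        out := make([]interface{}, len(in))
        for i, v := range in {
            out[i] = f(v)
        }
        return out
    }

    func main() {
        xs := []interface{}{1, 2, 3}
        doubled := Map(xs, func(v interface{}) interface{} { return v.(int) * 2 })
        fmt.Println(doubled[0].(int)) // every use site needs a runtime assertion
    }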

Unfortunately, Go is a language whose original creator is of the opinion that map/filter/reduce are just ways to make your program "fit on one line"[1], and that you should just use a for loop[2].

[1]: https://groups.google.com/forum/#!topic/golang-nuts/RKymTuSC... [2]: https://github.com/robpike/filter


I'm not sure why this is being downvoted. These are perfect examples of where generics would make a genuine improvement.

The sync.Map case for generics is a perfect example. Thanks for bringing it up.

Go 2, codename "Hell with generics". Just kidding of course. I am sure some useful form of generics will eventually find its way in.

More importantly though, looking at C++ I think it may be hard to come up with a generics system that doesn't lend itself to abuse and ridiculous mega constructs. I would love to see something that provides power in ways that disallow craziness (of the Boost Spirit kind) but provides enough power to cover all the cases that suffer without generics.

The craziness is preferable to passing void* or Object or interface{} everywhere. At that point, you might as well use a language without strong typing, for all the safety you're going to get. You can end up with just as much craziness with void* and Object and interface{}.

The Go team needs to stop trying to control what people do with their language. If people come up with crazy implementations, it's their own problem. It's up to the Go community to provide alternatives to prevent those crazy implementations from becoming standardized in Go programming culture. Refusing to implement generics because programmers might abuse them is paternalistic and counterproductive.


"If people come up with crazy implementations, it's their own problem."

Some people are turned away from C++ after seeing random bits of crazy language/template misuse, although when used sanely C++ could've served them nicely for their job.

Same thing is going to happen to Swift very soon.

https://engineering.kitchenstories.io/comfortable-api-reques...

Those function signatures are definitely not... self-documenting, I'd say.


Those signatures seem complicated to me because there's a pile of arguments and some of them are functions with extra annotations, not because there's generics involved. The equivalent signatures using interface{} (or maybe something more precise) would be likely just as noisy, if not more so.

Using interface{} you never get implicit/silent conversion, so Go's type system is strong (and static, obviously). It just lacks parametric polymorphism.

Both C# and Java have taken different approaches to constrained generics that are still really useful for about...I dunno, the 90% case (C# has reified generics and so it's more like 95%, but type erasure also has its pluses when you're off-roading around in a framework somewhere). Scala goes a little further and covers, like, the 99% case, and plenty of inadvisable ones too.

C++ doesn't have generics, it has templates, which are effectively a syntax-constrained form of macros. That's why it lends itself to abuse (or creative use, depending on your take on macros).

So C++ generates separate code for each template type and C# and Java do a typecheck at compile time and reuse the same codepath for all types at runtime?

.NET does a mixture of C++ and Java, meaning generic instantiations for value types get their own version of the code, while instantiations for reference types get a shared version.

It's worth noting that in case of C#, it's really an implementation detail. It could just as well share all instantiations by boxing and adding a bunch of runtime checks - the language semantics wouldn't change because of it.

In C++, on the other hand, separate compilation of template for each instantiation is effectively part of language semantics, because it has numerous observable effects.


"2 Hell with generics", or "Go 2 Hell with generics"

/jk 2


I would already be happy if they supported generics like CLU did in 1975, no need for anything too fancy.

This is just another way of saying fuck generics. They really can't see any solvable problems after years of people asking for generics?


That's a charitable way to look at the comment.

My more cynical reaction is that someone who doesn't understand the benefit of generics or can't even come up with examples where generics are superior to their alternative should not be allowed anywhere near a language design discussion.


Java's generics have had issues due to use site variance, plus the language isn't expressive enough, leading its users into a corner where they start wishing for reified generics (although arguably it's a case of missing the forest for the trees).

But even so, even with all the shortcomings, once Java 5 was released people migrated to usage of generics, even if generics in Java are totally optional by design.

My guess as to why that happens is that the extra type safety and expressivity are definitely worth it, and without generics the type system ends up getting in your way. I personally can tolerate many things, but not a language without generics.

You might as well use a dynamic language. Not Python of course, but something like Erlang would definitely fit the bill for Google's notion of "systems programming".

The Go designers are right to not want to introduce generics though, because if you don't plan for generics from the get go, you inevitably end up with a broken implementation due to backwards compatibility concerns, just like Java before it.

But just like Java before it, Go will have half-assed generics. It's inevitable.

Personally I'm sad because Google had an opportunity to introduce a better language, given their marketing muscle. New mainstream languages are in fact a rare event. They had an opportunity here to really improve the status quo. And we got Go, yay!


After some initial enthusiasm due to its gentle learning curve (actually, the Golang learning curve is nearly non-existent for an experienced programmer), I got sick of Go programming pretty quickly. Writing anything non-trivial will have you fighting against the limitations of the language (e.g., lack of generics which is particularly aggravating when trying to write a library).

I've mostly moved on to Clojure for programming backend services. I still use Go for the occasional small utility program due to its speed and ease of deployment, though.


I've programmed scores of libraries in Go and found it pleasant, in fact it's more pleasant writing a library in Go than in any other language I've used.

I've never once even considered the fact that I might need generics because I've never run into an unsolvable problem or an extremely inelegant engineering solution. I see a lot of people claiming that the lack of generics is a big problem but no one is actually showing a concrete case where generics are necessary.

C doesn't have generics but we never have this discussion when we talk about C.


If you have a []T and you want to apply a function to make it into a []K, you need to explicitly write a loop in every case. You might not think that's that bad, but note that T and K could also be interfaces such that T's method set is a superset of K's. In that case in Go, a T can be used as a K, but a []T cannot be used as a []K (meaning you have to write an explicit loop for something that should be an implicit type conversion).
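A concrete version of that, with hypothetical Dog/Animal types (a minimal sketch): every Dog satisfies Animal, yet a []Dog is not a []Animal, so you write the same copy loop by hand each time.

    package main

    import "fmt"

    type Animal interface{ Sound() string }

    type Dog struct{}

    func (Dog) Sound() string { return "woof" }

    func main() {
        dogs := []Dog{{}, {}}

        // var animals []Animal = dogs // does not compile: []Dog is not []Animal

        animals := make([]Animal, len(dogs)) // the explicit conversion loop
        for i, d := range dogs {
            animals[i] = d
        }
        fmt.Println(animals[0].Sound())
    }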

That is a trivial example where generics help. And yes, the endless boxing and runtime type assertion does make your code slower eventually (see the comment from the libcuckoo authors).

This is one of the reasons why the container and sort stdlib libraries are such a joke. They're not generic, so they have to special-case every different type. With generics, the entire sort package would just be a function that takes []Ordinal and sorts it. Container would similarly only require one implementation of each data structure (because they would be generic).

I find it hard to believe that you've programmed "scores of Go" and you've never hit any of these issues. I also have programmed plenty of Go, and these problems are just frustrating.


Been using it for years now in a setting with lots of data and never had any problems.

If that's true then you might not benefit from generics directly. But you will benefit from having an ecosystem of libraries that take advantage of generics. And even if you don't use other people's libraries, not everyone in the world happens to be as lucky as you and they do need generics to make their lives easier.

But again, I still severely doubt you've never run into a problem with Go's lack of generics. I ran into it after about 3 months of working with Go. Maybe you didn't find the work-arounds problematic, but that doesn't mean that the original limitation (no generics) doesn't cause problems. You just choose to accept the work-arounds as the "right way" of doing things.

If you haven't already, I would recommend looking at Rust or another modern language that has generics to see what sort of benefits you can get out of it that you can't directly get with Go. Personally, the fact that Go won't let you use []T as []I when T implements an interface I is pretty silly IMO. Rust lets you do that with the From trait, and you can do it even more generically using a generic T.


> C doesn't have generics but we never have this discussion when we talk about C.

Sure it does, a lightweight version was introduced in C11.

http://en.cppreference.com/w/c/language/generic

Besides, C gets an excuse because the language was designed before generics were even invented, and with pre-processor macros it is possible to implement a poor man's generic system.


Maybe you don't, but I often have a very similar discussion in C code reviews: The unfortunately ubiquitous use of void pointers.

Similar background here, but I found Go refreshingly liberating because everything in the language makes sense and there isn't a lot of magic going on. I might write a bit more code, but it's crystal clear how it works and, more importantly, I can come back in 6 months and grok the code. In fact, this is the first language I've used in 25 years of coding where I can read anyone's code and grok it. go fmt might be the best part of go.

The point of Go 2.0, IIUC, is that it throws away backwards-compatibility guarantees, so if they want to do something in Go 2.0 that breaks correct go1 code to support generics, that's on the table. So backwards-compatibility nastiness is not guaranteed.

Not quite. The article mentions that Go 1 code needs to be able to coexist with Go 2 in the same codebase.

So, at runtime, Go 1 code needs to be able to work with Go 2 types. That will impose some restrictions, I imagine.


>The article mentions that Go 1 code needs to be able to coexist with Go 2 in the same codebase.

Interesting. Are there any other languages where that is allowed? I'm assuming that by "codebase" you mean all the code that gets compiled into one program or library (for compiled languages), or is part of one program or library (for interpreted languages). And I don't mean the case where only a common subset of Lang X v1 features and Lang X v2 features are used in the same codebase (because in that case, there is no issue). That latter case is possible, for example, in Python, and probably many other languages too. In fact, by the definition of the case, it should be possible in all languages.


>>>>The article mentions that Go 1 code needs to be able to coexist with Go 2 in the same codebase.<<<<

>>Interesting. Are there any other languages where that is allowed?<<

C / C++

Perl 5/6

The various language feature pragmas in Haskell.


Interesting, didn't know this about C/C++, though I may have guessed it about Perl.

I would just read the article. Basically, the restriction they want to impose is to make sure that you can import v2 code into a v1 codebase and vice-versa with a v2 compiler. So that means any syntax breakage would need to be versioned or otherwise inferred, I guess.

> I'm assuming that by "codebase" you mean all the code that gets compiled into one program or library

Correct.

> Are there any other languages where that is allowed?

One would be Racket, a Lisp which allows you to pick a syntax flavour using the #lang directive: https://docs.racket-lang.org/guide/Module_Syntax.html#%28par...


My guess as to why people in Java use generics so much and switched to them so quickly is the squiggly yellow lines in their IDEs. A lot of Java code is easier to read/write with explicit and implicit casting and no use of generics. And the type erasure makes for sometimes unintuitive behavior at runtime. Being familiar with Java, I appreciate Go's skepticism toward generics because maybe just maybe they'll figure out a way to add them after the fact in an elegant way.

> Java`s generics have had issues due to use site variance

Use site variance can be really powerful when you're dealing with invariant data structures like Lists. You can then treat them like covariant or contravariant depending on a particular method call at hand.


I get that everyone would love to have a functional language that's eager by default with optional lazy constructs, great polymorphism, statically typed with inference, generics, a great concurrency story, an efficient GC, that compiles quickly to self-contained binaries with simple and effective tooling which takes only seconds to set up, while giving you performance that equals Java and can rival C, with a low memory footprint.

But, I don't know of one, and maybe that's because the Go team is right, some tradeoffs need to be made, and they did, and so Go is what it is. You can't add all the other great features you want and eat the Go cake too.

Disclaimer: I'm no language design expert. Just thinking this from the fact that I've yet to hear of such a language.


Honestly, no. For starters, I don't want a functional language (at least not a purely functional one). Purely functional programming has some distinct downsides. My preferences are firmly on the multiparadigm side, and I'm perfectly happy with a reasonable imperative language (that still supports higher-order functions and closures).

What I'm not happy with is language designers seemingly ignoring the state of the art even when it comes to picking low-hanging fruit.


Sounds like Scala might be what you are looking for if multi paradigm is your thing.

Thank you, but ...

First, I'm not looking for a language. Programming languages are part of my field. I was making an observation rather than expressing a need. If there's a language that's not totally obscure or esoteric, I've probably written some serious code in it at some time.

Second, I know and use Scala (among other languages) and have done so since the 1.x versions, but the JVM dependency is too often a problem (and Scala Native is not yet mature).


With the possible exception of the GC, D (dlang) fits those parameters, including optional laziness, performance, etc.

Nearly every item on your list is available with OCaml.

Apart from "great concurrency story", because of the well known problems with the GC. This might be a huge shortcoming for a number of people.

Multicore OCaml is in the works and is looking amazing, but is indeed a ways away. They're taking their time and doing it properly so things are fast and safe.

The thing is, until it's shipped, it's still vaporware for most people, especially if they want to use it in a commercial setting.

KDE 4 was released in 2008 and it was supposed to be ported to Windows. We're in 2017 and KDE 5 is still far from being considered stable on Windows. And I'm sure you can also think of many examples of programs promised and never delivered or underdelivered.


> was supposed to be ported to Windows

It was ported to Windows. I cannot check the remarks about stability because I use Linux, but it sounds plausible. There's only a handful (maybe 2 or 3) developers working on Windows support in their spare-time AFAIK. I would guess that KDE had anticipated more Windows developers joining the project as it progressed towards maturity.

That's always the problem with open-source projects: It's very hard to do planning and forecasting with a bunch of volunteers. Even if they commit to a roadmap, there will always be someone who has to step down because of private issues (new job, new child, etc.). Go is in a much better position since Google has headcount assigned to it (again, AFAIK).


F# is pretty good there, especially if one uses a library like Hopac. However I think it's still not on the level of Go, since concurrent operations are represented through monadic types (like Job<T>), and so everything has one level of indirection. F# workflow builders ("do notation") makes that somewhat more bearable, but I still think it's harder to work with than just using normal function calls on normal lightweight threads, like available in Go.

It isn't a great concurrency story, but the shortcomings are also overstated. OCaml does message-passing concurrency just fine and allows for shared memory for arrays and matrices of scalars, which is good enough for most scenarios.

(Technically, there's also Ocamlnet's multicore library, but that's too low-level for most people.)


A large standard library, too?

it's a bummer that the windows story for ocaml is so sketchy. i'd like to use it for work (various utilities and whatnot) but the ecosystem seems to make a lot of assumptions about the system it runs on. (for me there's still haskell and rust at least.)

How about F#?

All the advantages of ocaml, cross platform, great tooling

I really enjoy writing OCaml but I hate all the tooling around it. It lacks a good package management and build system à la Cargo.

The build system is a bit funky but the opam package manager is actually one of the nicest I have used.

jbuilder is really nice and the ecosystem is picking it up quickly.

Except no one's ever heard of OCaml.

Depends how good their CS degree was.

Rust is probably what you are looking for.

> a functional language

Has closures.

Has pattern matching.

Has algebraic data types (however variants can't have generic parameters not present on the data type itself, but you can use trait objects to do that).

Functions that don't take &mut parameters, & parameters with types with interior mutability and don't call functions that do I/O are pure (although they may not terminate and may abort).

> optional lazy constructs

"Automatic" lazy costructs are usually unnecessary and thus almost never used, but can be implemented as a library using macros.

Can also just use closures and call them explicitly.

> great polymorphism

Has trait/typeclass polymorphism with either monomorphization or dynamic dispatch.

Structures can't be inherited, but composition gives equivalent results and is the proper way to do it.

There have been several proposals for inheriting structures though, so maybe it will be available at some point, despite not being advisable.

> statically typed with inference

Yep.

> generics

Yup.

Not higher kinded, no constant integer parameters and not variadic yet, but those features should be available in the future.

> great concurrency story

Rust is the only popular language supporting safe concurrency without giving up the ability to share memory between threads.

> an efficient GC

It turns out that a GC is unnecessary, and that reference counting is enough, which is what Rust provides (although it's still relatively rarely used).

If you insist on a tracing GC, there are/have been several attempts to add it, and maybe there will be a mature one in the future.

In particular, there is a "rust-gc" crate, and Servo has a working integration with the SpiderMonkey GC, which I suppose should be possible for others to use with some work.

> compiles quickly

Work in progress (incremental compilation, more parallelization).

> self contained binaries

Yes (or optionally using shared libraries).

> simple and effective tooling which takes only seconds to setup

rustup can install the compiler and package manager in one line.

If you want an IDE, you'll have to install that too separately. Work in progress on improving IDE functionality and ease of use.

> giving you perfomance that equals java and can rival C

Performance is at least theoretically the same as C since the generated assembly code is conceptually the same (other than array bounds checking, which can be opted out from with unsafe code).

> low memory footprint

Same as C.


How do you get self-contained binaries with Rust? There's a lot of talk but I've not found a definitive "produces a static library" build guide.

They're the default, even when you include 100+ packages via Cargo. The only issue I've faced is standard Linux cross-distro issues due to libc versions. Even most of the crates that handle bindings to C libraries link them statically by default.

As my sibling commentor says, they're the default. There are some details though:

Rust uses glibc by default, which must be dynamically linked. It's usually the only thing that's dynamic about Rust binaries, but it does mean that compiling on an old CentOS box is a decent idea if you want a wide range of compatibility.

Alternatively, you can use MUSL, which works like this:

    $ rustup target add x86_64-unknown-linux-musl
    $ cargo build --target=x86_64-unknown-linux-musl
Boom, now libc is statically linked too.

The default is that all Rust code is statically linked, but since you may link C code as well, that may or may not be statically linked. It's done by packages, so there's no real default. Many of them default to static and provide the option of dynamic.

That's why there's not really a guide, that's really all there is to it.


Sounds like you want Eager Haskell.

Idris is basically an eager-evaluated Haskell, with other niceties on top (dependent types if you want them; fixed String type)

yes please

OCaml might become this once multicore happens.

Here be Opinions:

I hate generics. also, I hate exceptions.

Too many people are wanting "magic" in their software. All some people want is to write the "Happy Path" through their code to get some Glory.

If it's your pet project to control your toilet with tweets then that's fine. But if it's for a program that will run 24/7 without human intervention then the code had better be plain, filled with the Unhappy Paths and boring.

Better one hour writing "if err" than two hours looking at logs at ohshit.30am.


there's nothing magic about generics. every time you make a channel or a map in go, you're using a generic function even if go people don't want you to call it that.

there's nothing magic about exceptions either; it's just that it's harder than necessary to use them correctly, and that's why it's not as big of a deal to not have them in go - as evidenced by this thread.


I can sympathize with the dislike of exceptions. I very much prefer Rust's error handling model (return Result<T, Err>)

The fact that Go has had this much success without generics is mind-boggling to me.


Because Google.

Go is quite similar to Limbo, by some of the same authors, how much success did it have outside AT&T?


Limbo was not freely available.

Vita Nuova has it on an open source license since March 2000, which means it has been freely available for the last 17 years!

Yes, go has generics, or at least parameterized types, used for channels and maps. What it doesn't have are user-defined parameterized types.

I think that's the point really. I think the average programmer likes using generics, but rarely builds their own code with them. I've used them from time to time to great effect, but the use cases were pretty isolated. Maybe that makes me average? /shrug

Curious, how is a map generic?

If you declare a map[string]int, Go will guarantee that keys are always strings and values are always ints, and it's a compile-time error to pass it to someone expecting a map[string]float64. Without generics, the only way to do that is to generate and compile a MapStringInt struct and a MapStringFloat64 struct and a...
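In other words, something like this (a minimal sketch):

    package main

    import "fmt"

    func sum(m map[string]int) int {
        total := 0
        for _, v := range m {
            total += v
        }
        return total
    }

    func main() {
        ages := map[string]int{"ann": 30, "bob": 25}
        fmt.Println(sum(ages)) // 55

        // prices := map[string]float64{"tea": 1.5}
        // sum(prices) // compile-time error: wrong map type
    }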

In Go, the magic right now is actually in special-cased types like map. Generics actually have the potential to reduce the amount of magic as it exists currently.

The design can go badly wrong, of course, but it can also go wonderfully right. Generics are a very important feature to consider for the language.

I have no opinion on exceptions with regard to Go specifically. I think they serve a good purpose in other languages but are often misused.


So much this. Been doing this for 25 years and I loathe magic like Exceptions. I consider Go's explicit error handling a plus. Any editor would let you code ife to generate the error scaffold.

I don't hate generics, but I don't require them either. If the Go guys can make them work well and fit into the language i'd be fine with it, but I don't want them to feel pressured into throwing something out there for the language purists. I doubt they feel pressured. They've been around the block and have as good a sense as any group i've ever seen for what to include in the language.

Go is not the language to use if you want to write clever code, but it's great when you write code for businesses that must run 24x7.


> If the Go guys can make them work well and fit into the language i'd be fine with it, but I don't want them to feel pressured into throwing something out there for the language purists.

"Purism" in this situation is to deny every useful feature because your compiler developers aren't capable of implementing them.


As erlang shows, quite the contrary. If you want your code to stay up and running you should concentrate on the happy path, bulkhead and let it crash.

I advise reading about Human Factors and Complex Systems engineering. You will be surprised.


> As erlang shows, quite the contrary. If you want your code to stay up and running you should concentrate on the happy path, bulkhead and let it crash.

If you want crappy software...


in a sense, generics just let you provide parameters to your types, just like you give parameters to functions. it's not all that magical.

Yes, I read in an article about/by Walter Bright on D that this was the insight he had when working on the design of generics or templates for D:

To paramet(e)rize the types (of a class or function or method), just like you parametrize the arguments of a function or method.

Update: searched for and found the article that mentions that:

https://dlang.org/blog/2016/08/30/ruminations-on-d-an-interv...

Excerpt (italics mine):

[ Walter: We nailed it with arrays (Jan Knepper’s idea), the basic template design, compile-time function execution (CTFE), and static if. I have no idea what the thought process is in any repeatable manner. If anything, it’s simply a dogged sense that there’s got to be a better way. It took me years to suddenly realize that a template function is nothing more than a function with two sets of parameters –compile time and run time–and then everything just fell into place.

I was more or less blinded by the complexity of templates such that I had simply missed what they fundamentally were. There was also a bit of the “gee, templates are hard” that predisposes one to believe they actually are hard, and then confirmation bias sets in.

I once attended a Scott Meyers presentation on type lists. He took an hour to explain it, with slide after slide of complexity. I had the thought that if it was an array of ints, his presentation would be 2 minutes long. I realized that an array of types should be equally straightforward. ]


What don't you like about generics?


If you can't handle using generics sensibly then that's not a language problem, it's a you problem.


    > To minimize disruption, each change will require
    > careful thought, planning, and tooling, which in
    > turn limits the number of changes we can make.
    > Maybe we can do two or three, certainly not more than five.

    > ... I'm focusing today on possible major changes,
    > such as additional support for error handling, or
    > introducing immutable or read-only values, or adding
    > some form of generics, or other important topics
    > not yet suggested. We can do only a few of those
    > major changes. We will have to choose carefully.
This makes very little sense to me. If you _finally_ have the opportunity to break backwards-compatibility, just do it. Especially if, as he mentions earlier, they want to build tools to ease the transition from 1 to 2.

    > Once all the backwards-compatible work is done,
    > say in Go 1.20, then we can make the backwards-
    > incompatible changes in Go 2.0. If there turn out
    > to be no backwards-incompatible changes, maybe we
    > just declare that Go 1.20 is Go 2.0. Either way,
    > at that point we will transition from working on
    > the Go 1.X release sequence to working on the
    > Go 2.X sequence, perhaps with an extended support
    > window for the final Go 1.X release.
If there aren't any backwards-incompatible changes, why call it Go 2? Why confuse anyone?

---

Additionally, I'm of the opinion that more projects should adopt faster release cycles. The Linux kernel has a new release roughly every ~7-8 weeks. GitLab releases monthly. This allows a tight, quick iterate-and-feedback loop.

Set a timetable, and cut a release with whatever is ready at the time. If there are concerns about stability, you could do separate LTS releases. Two releases per year is far too few, I feel. Besides, isn't the whole idea of Go to go fast?


(Copying my reply from Reddit.)

>If you finally have the opportunity to break backwards-compatibility, just do it.

I think Russ explained pretty clearly why this is a bad idea. Remember Python 3? Angular 2? We don't want that to happen with Go 2.0.

>Additionally, I'm of the opinion that more projects should adopt faster release cycles.

I am of the opposite opinion. In fact, I consider quick releases to be harmful. Releases should be planned and executed very carefully. There are production codebases with millions of lines of Go code. Updating them every month means that no progress will be made at all. The current pace is very pleasant, as most of the codebases I work with can benefit from a leap to newer version every six months.


I think Python was too ingrained to pull a "Python 3"; Go is newer, and this could be more like a "Swift 3".

Break early and then stabilize for a long time; just make sure everybody knows the plan.


It's barely related to the topic, but another team in my company is thinking about rewriting a pretty large codebase in ObjC into something more sustainable. They briefly discussed Swift, but decided against it exactly due to its relatively frequent breaking changes.

Which emphasises even more the fact that large private codebases really dislike breaking changes.


Except that there should not be many breaking changes in Swift. Nothing on the scale Swift 2 -> Swift 3.

> I think Russ explained pretty clearly why this is a bad idea. Remember Python 3? Angular 2? We don't want that to happen with Go 2.0.

The problem with Python 2/3 is that Python 3 didn't add enough new features to make people want to move to 3.

The problem with Angular 2 is that it just didn't know what it wanted to be.

If Go 2 doesn't break enough, yet still breaks, it will be no different from the Python 2/3 fiasco.

Go has serious design issues; the Go team ignoring them hindered Go's adoption pace in the first place.


Breaking things isn't what makes people want to move; it's a cost, not a benefit. Now, you need to offer benefits to get people to pay the cost for those benefits. The Python 2/3 problem was because there was too much breakage for the perceived benefit (especially early on) for many users, not because there was too little breakage.

Fwiw, I think breaking implies improving. Arguing that breaking doesn't inherently mean improving is.. well, duh. So in the case of many peoples comments here, "not breaking enough" means not improving enough. I know this is an obvious statement, but I feel like you're arguing a moot argument.. i.e., being a bit pedantic.

As an aside, since you're making the distinction, can you have meaningful benefit without breakage? Eg, you're specifically separating the two - so can you have significant improvements without breakage?

It would seem that pretty much any language change, from keyword changes to massive new features, breaks compatibility.


> As an aside, since you're making the distinction, can you have meaningful benefit without breakage?

Sure, in two ways:

(1) Performance improvements with no semantic changes.

(2) New opt-in features that don't break existing code (such as where code using the new feature would have just been syntactically invalid in the old version, so the new feature won't conflict with any existing code.)

There would be no reason for SemVer minor versions if you couldn't make meaningful improvements while maintaining full backward-compatibility.


Seems I can't edit my post, but:

I think I'm simply wrong here. I was envisioning "breaking" as being incompatible with Go 1. If Go 2 were a superset of Go 1, it would allow Go 1 code to run flawlessly in Go 2 and still allow any new keywords/features.

My assumptions were incorrect, and writing my reply to you sussed it in my head. Thank you for your reply, sorry for wasting your time :)


>As an aside, since you're making the distinction, can you have meaningful benefit without breakage? Eg, you're specifically separating the two - so can you have significant improvements without breakage?

One way is by having a new language feature which does not interact with anything else in the old version of the language, i.e. is orthogonal.


Seems I can't edit my post, but:

I think I'm simply wrong here. I was envisioning "breaking" as being incompatible with Go 1. If Go 2 were a superset of Go 1, it would allow Go 1 code to run flawlessly in Go 2 and still allow any new keywords/features.

My assumptions were incorrect, and writing my reply to you sussed it in my head. Thank you for your reply, sorry for wasting your time :)


Adding generics isn't even backwards-incompatible. AFAIK Java 1.5 was just fine with Java 1.4 code, and so was C# 2.0. The latter's use of reified generics (and not cheating with builtins) led to the duplication of System.Collections, but builtins aside, as of Go 1.9 Go has under half a dozen non-generic collections (versus 25 types in System.Collections, though some of them didn't even make sense as generics and don't have a System.Collections.Generic version), so that's unlikely to be an issue.

> so was C# 2.0.

True, but C# also has nominal types, overloading, and explicit interface implementation. Adding generics without breaking existing code without those features looks very difficult to me.


They don't have to be Java-ish generics.

Think SML/OCaml/Ada/Modula-3 (modules parameterized by modules) or Haskell (typeclasses).


Actually that would be the easier way. It is also how CLU had them.

What makes it difficult?

They're trying to avoid creating the new Python 3

Go doesn't have const structs, maps or other objects:

https://stackoverflow.com/questions/43368604/constant-struct...

https://stackoverflow.com/questions/18342195/how-to-declare-...

This is a remarkable oversight which makes it impossible to write purely-functional code with Go. We also see this same problem in most other imperative languages, with organizations going to great lengths to emulate const data:

https://facebook.github.io/immutable-js/

Const-ness in the spirit of languages like Clojure would seem to be a relatively straightforward feature to add, so I don't really understand the philosophy of leaving it out. Hopefully someone here knows and can enlighten us!
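Concretely, this is the wall you hit today (a minimal sketch with a hypothetical Point type):

    package main

    import "fmt"

    type Point struct{ X, Y int }

    // const origin = Point{X: 0, Y: 0}         // does not compile: consts are limited to basic types
    // const limits = map[string]int{"max": 10} // same story for maps

    var origin = Point{0, 0} // the usual workaround: a package-level var, which anything can mutate

    func main() {
        origin.X = 42 // nothing stops this
        fmt.Println(origin)
    }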


There doesn’t need to be a philosophy behind leaving it out. Go was started by selecting only features which were deemed necessary. I think it’s fair to assume the creators of Go didn’t design it for writing purely-functional code, so that’s why it’s not in (yet?)

it's almost like writing purely-functional code is not the goal of Go.

I believe part of the reason was also some experience with C++, in which you sometimes have to "unconst" some fields of your const classes (the mutable keyword). This is a really ugly and nonintuitive design, so I assume they'd rather take extra care to make sure they don't have to repeat it. Even if it means no const at all.

I don't think this kind of thing is all that non-intuitive if you reframe const as shared vs. unique references. Rust is a good example of this, although with Go you would want to sidestep all the Cell stuff since it's unnecessary.

> For example, I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve. […] If we had a large set of real-world use cases, we could begin to answer a question like this by examining the significant ones.

Not implementing generics, then suggesting that it would be nice to have examples of generics being used in the wild… You had it coming, obviously.

Now what's the next step, refusing to implement generics because nobody uses it?

> Every major potential change to Go should be motivated by one or more experience reports documenting how people use Go today and why that's not working well enough.

My goodness, it looks like that is the next step. Go users have put up with the absence of generics, so they're not likely to complain too loudly at this point (besides, I hear the empty interface escape hatch, while not very safe, does work). More exacting developers have probably dismissed Go from the outset, so they won't be able to provide those experience reports.


> Not implementing generics, then suggesting that it would be nice to have examples of generics being used in the wild… You had it coming, obviously.

I think you misunderstood. Clearly he meant to ask for examples from the real world that lack generics, but shows how adding them would improve the system.


> Clearly he meant to ask for examples from the real world that lack generics, but shows how adding them would improve the system.

I don't think many such examples will emerge.

First, you have the empty interface escape hatch. It's cumbersome, but it works. The real problem with the repeated use of this escape hatch (instead of proper generics) is the lack of compile-time type safety, which is replaced by dynamic checks. This introduces elements of dynamic typing that generics could have avoided. This has a cost, which unfortunately is hard to assess.

Second, Go users tolerate the absence of generics for some reason. Maybe they don't really need them, but then they don't have a compelling use case. Maybe they do need them but didn't realise the limitations of the language would hurt them down the line; in that case, are they competent enough to articulate why generics would be better? They made quite the blunder when they chose Go, after all.

That said, he also wrote this:

> For example, I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve.

Of course he doesn't: Go doesn't have generics. Go users found other ways, thus proving they didn't really need generics. And generics users don't use Go…


Ok, thanks for the clarification. I wonder if there are situations in big teams in which one team that uses generics can take a look at a large Go codebase within the same company and see what they think? They must have received such feedback at Google, surely?

Most probably. But they might ignore it anyway.

There's at least one high profile example: the standard library itself. It must have been obvious by the time they designed the standard library, before the language was out. They had to have feedback then, just look at the trail of rage against the absence of generics.

They ignored it then —I have no idea why. I'm not sure they'll listen now.


This was when I kind of lost interest, as it became clear generics would never happen.

In the early days I was enthusiastic enough that I tried to provide support for the os.user implementation on Windows.

Nowadays I just advocate Go for C use cases, where the use of a tracing GC is not a hindrance, as a way for us to enjoy safer computing.

Oh, and the code of the new syncmap.Map is full of unsafe.Pointer as a workaround for the lack of generics, while achieving good performance.

https://github.com/golang/sync/blob/master/syncmap/map.go


As much as I want them to fix the big things like lack of generics, I hope they fix some of the little things that the compiler doesn't catch but could/should. One that comes to mind is how easy it is to accidentally write:

  for foo := range(bar)
Instead of:

  for _, foo := range(bar)
When you just want to iterate over the contents of a slice and don't care about the indices. Failing to unpack both the index and the value should be a compile error.

I've been caught by this more than once. Again, implicit behavior sucks, so I suggest they get rid of the first form.

Here's a toy version of a real bug that I wasted a bit of time debugging which was due to this behavior: https://play.golang.org/p/6pBUPBTTvj
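For anyone who doesn't want to click through, the bug is roughly of this shape (a minimal sketch, not the original playground code):

    package main

    import "fmt"

    func main() {
        temps := []float64{21.5, 23.0, 19.8}

        total := 0.0
        for t := range temps { // BUG: t is the index (0, 1, 2), not the value
            total += float64(t)
        }
        fmt.Println(total / float64(len(temps))) // prints 1, not the average temperature

        total = 0.0
        for _, t := range temps { // what was actually meant
            total += t
        }
        fmt.Println(total / float64(len(temps)))
    }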

Would be cool if you could consider doing a writeup about it, and linking it on the mentioned wiki page.

If `bar` is a slice then the short syntax isn't that useful (although it's shorter than the three-clause for loop equivalent).

But if `bar` is a map or a channel, then that short syntax is very handy.

For comparison's sake, you don't see many PHP users complaining about the difference between `foreach($bar as $foo)` and `foreach($bar as $_ => $foo)`.


I'd also be fine with it if they switched the ordering of the tuple so that only unpacking one thing gave you the element instead of the index. My point is mainly that the behavior you get currently with a slice is not what you want 9 times out of 10.

I like the Go concept: a very simple and minimalistic language, yet usable enough for many projects, even at the cost of some repetition. Generics are not a concern for me. But error handling is the thing I don't like at all. I think that exceptions are the best construct for error handling: they are not invasive, and if you don't handle an error, it won't die silently; you have to be explicit about ignoring it. In my programs there's very little error handling, usually some generic handling at layer boundaries (an unhandled exception leads to a transaction rollback, an unhandled exception returns as HTTP 500, etc.) and very few cases where I want to handle it differently. And this produces a correct and reliable program with very little effort.

Now with Go I must handle every error. If I'm lazy, I handle it with `if err != nil { return err }`, but this style doesn't preserve the stack trace and it might be hard to understand what's going on. If I want to wrap the original error, the standard library doesn't even have this pattern; I have to roll my own wrapper or use a third-party library for such a core concept.

What I'd like is some kind of automatic error propagation, so any unhandled error returns from the function wrapped in some special type with enough information to find out what happened.
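For what it's worth, this is roughly the pattern I end up with using the third-party github.com/pkg/errors package (a minimal sketch): Wrap annotates the error and records a stack trace, and %+v at the boundary prints the whole chain.

    package main

    import (
        "fmt"
        "io/ioutil"

        "github.com/pkg/errors"
    )

    func loadConfig(path string) ([]byte, error) {
        data, err := ioutil.ReadFile(path)
        if err != nil {
            // Wrap keeps the original error and records a stack trace at this point.
            return nil, errors.Wrap(err, "loading config")
        }
        return data, nil
    }

    func main() {
        if _, err := loadConfig("/does/not/exist.toml"); err != nil {
            fmt.Printf("%+v\n", err) // %+v prints the message chain plus the stack trace
        }
    }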


Generics have never stopped me from building in Go... But without them I often do my prototyping in python, javascript, or php.

Working with batch processing I'm often changing my maps to lists or hashes multiple times during discovery. Go makes me rewrite all my code each time I change the variable type.


It's weird, I see these complaints so often and I just.. don't.. get it. I'm not sure what I do differently, but I'm just so used to Go's method of non-generic that I don't run into frustration at all.

The only time I even notice it is if I have to write a method like `AcceptInt(), AcceptString(), AcceptBool()` and etc.

I enjoy generics in Rust, I'm just not sure what I'm doing differently in Go that causes me to not miss them.


It's possible your data (and types) are short lived.

When I'm doing text processing for example, I pass the same strings/dicts/hashes through dozens of functions for cleaning, sorting, organising, benchmarking, comparing, etc..

It's not just in->save->out CRUD work.


I'm actually coding a LITTLE project with 7 tables and wondering if Golang was the right choice... but I picked Golang because there is some async realtime component. My pace of dev is very slow.

What aspect is causing it to be slow for you? Note that there are definitely some areas of Go I find terrible, and SQL is one of them. Check out the SQLx library, it's far less painful than the stdlib SQL is.

Error handling, avoiding nulls, writing queries...

The beauty of Go is that you get developer productivity pretty close to Ruby/Python levels with performance that is similar to Java/C++

Improvements to package management is probably the highest item on my wishlist for Go 2.


There is so much boilerplate you have to write that productivity drops considerably, both because you have to re-re-re-re-implement things (or use code generation) and because you need to scan lots of redundant code before making sense of it.

The alternative is often to just use interface{}, which hurts performance and type safety.


This has so far been a theoretical concern for me only. If you want to write reusable data structures you obviously need generics. But none of the code I've written involves reusable data structures, so I can't say I really miss generics.

This is countered by having a much smaller footprint for the language / SDK in your brain. I easily write code twice as fast in Go as I do in Swift.

> Improvements to package management is probably the highest item on my wishlist for Go 2.

The work is in progress and will most likely be available for general purpose use way before Go2.

Check it out: https://github.com/golang/dep


That's going to happen before go 2. Go 1.9 / 1.10 time frame.

As a ruby and go dev, I'm a bit sad to see backward-compatibility going. Thinking I could write code with minimum dependencies and that would just work as is years later was really refreshing compared to the high level of maintenance needed in a ruby app.

But well, I trust the core team to make the best choices.


How about fixing all the GOPATH crap?

We've worked around it like this and it works pretty well: https://github.com/getstream/vg

Thanks for this. For the first time in my six months using Go the 'peek definition' and 'goto definition' options work correctly!

I'm so undecided on using GOPATH like that. Though, I may just do it.

Personally I hate Go's imports, and I think the GOPATH is a terrible, terrible idea. Yet, I quite like the vendor directory, and I expect the new `dep` tool to be the final touch.

Now my only problem is that I often forget which projects of mine are in my GOPATH, so I'm afraid to delete them -_-


vendor has big problems too. There are some proposals on how to start fixing them, after the dep tool lands (originally slated for 1.7 or 1.8 but still in Alpha?)

To support an uncommon edge case, src/foo/vendor/dep is treated as completely different from src/bar/vendor/dep.

Your src/foo code can't pass src/foo/vendor/dep types to src/bar code expecting src/bar/vendor/dep types. Even if the deps are identical checkouts. Code can become uncompilable by the act of vendoring a dependency.

You have two completely different sets of globals. Code can stop working by the act of vendoring a dependency.

The initialization runs twice. So well-known packages like glog will panic if two packages have it vendored, because both try to update settings in the standard library.

GOPATH would be preferable, but none of the tools for managing packages at that level support pinning. So if you dare to develop more than one package at a time, you either end up with a single monolithic repository representing your GOPATH, pinning your dependencies in src/, or you end up with symlink hacks and other unmaintainable tricks.


Oh I definitely agree there, I didn't mean to say it was flawless. When I wrote that, I was more thinking of the UX of vendoring.

Eg, `import "foo/lib"` pulls seamlessly from vendor, which is a really nice UX if people want to vendor.

With that said, I still think a proper package manager à la Rust's Cargo.toml will be better. Here's hoping Go's `dep` solves this crap basket :)


If you look two parents up you see my colleague posted virtualgo (https://github.com/GetStream/vg). That's the tool I built at my company to support GOPATH and version pinning (which is forwarded to dep). You can use it to solve the exact issue you're describing by using "vg uninstall packageName".

I think 1.8 addressed the issue.

https://golang.org/doc/go1.8#gopath


"The GOPATH issue" is the requirement that I keep my source code checkouts at any particular path, in any particular structure, to be able to use the toolchain.

Solving the GOPATH issue would be letting me "git clone" a Go project to a directory of my choosing and still use the tools.


yeah, I don't fully understand this. I've had fewer problems with GOPATH than with any other language's setup. You literally set GOPATH=${HOME} and it just works.

I'm also curious as to what issues you have come across with GOPATH.

Multiple unrelated Go projects in different repos located in different directories.

No problem at all with every other language, except with Go - not possible!

I want to open cmd.exe (Win) or bash (Linux), cd to the directory, and build. And not configure a GOPATH in the environment variables of the current user, or prepend it to the batch script, or set it in the IntelliJ Go plugin, etc. It's so inflexible, so awkward, and there is no reason for this crap at all. So please fix it for Go 2. Thx


This is the sort of thing that makes it hard to get started with Go in my opinion.

It shouldn't be. Set the GOPATH once and use any text editor.

Except you have to set it every time you open a shell. You can’t put it into your .bashrc either unless you only ever work on one project. This workflow sucks:

    $ cd projects/foo
    $ export GOPATH=$PWD
    $ cd src/github.com/company/foo
    $ go build

I use one GOPATH and work on dozens of projects. I would hate it if I had to change GOPATH often. In Go, if you find yourself fighting the system, chances are you are doing it wrong.

'go get github.com/user/project && cd $GOPATH/src/github.com/user/project'.

And to avoid dependency issues, I use one of the many vendoring tools. Currently 'govend'.


You should try out https://github.com/GetStream/vg, it handles managing of multiple gopaths in such a way that cding to a directory switches you automatically.

OP was talking about getting started. I also don't understand why you can't use the same GOPATH across multiple projects, especially with godeps?

Multiple unrelated Go projects in different GIT/SVN repositories. GOPATH experience is painful.

make still works. I don't go build, I make build. Yes, you have to set it up for each project, but only once.

Give me cargo for go either way.


I still find it difficult to install and use Go.

Huh. It was dead simple for me.

++

At least now we have vendor. But this alien style of fixed paths is really annoying - no other language forces such GOPATH crap. Please remove this limitation from Go 2!


What exactly is wrong with GOPATH? I find it much easier to reason about than imports in say Ruby or Python, which are global names with no hierarchy and are resolved in much less obvious ways.

There's no obvious way to separate personal work, from work work, from exploratory work, from any other way you want to categorise your source directories. You just have to chuck them all in the same root and hope you can remember what each project is for.

Uhh, just keep separate gopaths? I keep a separate go path per project.

That is the reason why GOPATH is an environment variable: so you can change it easily, your own way.

What? Manage it the way you'd manage any other environment variables, like AWS_SECRET_ACCESS_KEY. That's exactly what a bash environment is for.

You want to manage GOPATH as if it were a secret key!? Not ever storing it in repos, having weird dotfiles storing them locally, having to set up a keystore cluster like consul in production? Yuck.

The point is it's a per-project config. And yeah, there's no reason to store "/home/artur/go" in my git repo - that wouldn't work for my coworkers whose names are not Artur.

The thing I dislike is how such languages try to abstract away the filesystem representation of packages/modules/libraries/etc.

The conventions and resolution approaches that are needed to accomplish this often end up being opaque, less flexible, and harder to work with.

I much prefer the approach of C, C++, and even PHP to some extent, where the interaction with the filesystem isn't hidden.

When using C or C++, it's trivial to reference a globally-installed library, such as the standard libraries of such languages. It's also simple to reference third-party libraries, or even to include local copies of third-party libraries within one's own source tree. Relative or even absolute paths can easily be specified when referencing external code.

This also gives so much more flexibility with regards to how a project is laid out, and how it pulls in other code or libraries it depends on. I can follow an approach that works for me, for my project, for my team, for my version control system, for my development environment, for my deployment environment, and so on.

I want the language to conform to my needs. I don't want to have to modify my behavior and my environments to conform with what the language's developers deem to be the "right way" of doing things, especially when this isn't compatible with my needs.

Maybe this means there's slightly less consistency with how libraries and dependencies are handled across projects and libraries, but that's a cost that I'm willing to pay, and in practice it actually isn't that much of an issue when using languages like C and C++.

I'd much rather spend a few seconds specifying "-I" and "-L" and "-l" options when compiling or linking than trying to remember a bunch of conventions or how the high level package names end up mapping to the installed modules or libraries.

I'm not saying that the C approach is perfect, or that I'd want other languages to use a C preprocessor like approach of actually combining separate source files just prior to compilation or even execution.

But I would really like it if modern languages didn't try to hide the existence of files and directories so much, and didn't try to force conventions on me. I'd rather deal with the very small cost of explicitly telling the compiler where to find dependencies rather than relying on conventions or opaque resolution processes.


> For example, I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve.

Collections?


But I always heard never to use Go 2! :P

"We estimate that there are at least half a million Go developers worldwide, which means there are millions of Go source files and at least a billion of lines of Go code"

Perhaps fewer than a billion lines of Go code if there were generics.

I must say that whenever there is a discussion about the merits of the Go programming language, it really feels hostile in the discussion thread. It seems that people are seriously angry that others even consider using the language. It is sort of painful reading through the responses which implicitly declare that anybody who enjoys programming with Go is clueless.

It also really makes me wonder if I am living in some sort of alternate reality. I am a professional programmer working at a large company and I am pretty sure that 95% of my colleagues (myself included, as difficult as it is for me to admit) have no idea what a reified generic is. I have run into some problems where being able to define custom generic containers would be nice, but I don't feel like that has seriously hindered my ability to deliver safe, functional, and maintainable software.

What I appreciate most about Go is that I am sure that I can look at 99% of the Go code written in the world and I can understand it immediately. When maintaining large code bases with many developers of differing skill levels, this advantage can't be overstated. That is the reason there are so many successful new programs popping up in Go with large open-source communities. It is because Go is accessible and friendly to people of varying skill levels, unlike most of the opinions expressed in this thread.


>What I appreciate most about Go is that I am sure that I can look at 99% of the Go code written in the world and I can understand it immediately.

No you really can't. You can look at any given line of code and tell me what it does, whereas it might take longer to parse any given line of Haskell, Scala, or OCaml.

However, expressivity in a large application pays a dividend tenfold, because the main challenge of reading code is not any given line, it is understanding the application architecture, the frameworks, the data flow and how it all works together.

> That is the reason there are so many successful new programs popping up in Go with large open-source communities.

Successful, or popular? Just look at Docker. Certainly popular, but check the bug tracker or talk to anyone with real world experience using it at scale and you'll be hearing a completely different story.


I think the negative attitude comes more from frustration than from anything else.

> I am a professional programmer working at a large company and I am pretty sure that 95% of my colleagues (myself included, as difficult as it is for me to admit) have no idea what a reified generic is.

I didn't know what it was called either until I saw them mentioned here, but now that I know about it, I get why it would be useful. Before that, I kind of assumed all languages with generics would also allow you to access the type information at runtime.

> I have run into some problems where being able to define custom generic containers would be nice, but I don't feel like that has seriously hindered my ability to deliver safe, functional, and maintainable software.

I don't mean any disrespect, but this is a very good example of the Blub Paradox: http://wiki.c2.com/?BlubParadox


> I don't mean any disrespect, but this is a very good example of the Blub Paradox: http://wiki.c2.com/?BlubParadox

Interesting read, and I admit there is some of that. But my point isn't to say that those features aren't useful or powerful, but rather that with the constraint of working with a large group of programmers of varying skills, simplicity has more value than power (as long as we can deliver software that meets our requirements). It is similar to point 4 in the "Problems with the Blub Paradox" section.


One of the problems is who decides what is too simple. E.g. why are for loops, function calls and lambdas considered simple enough to be in Go, but generics aren't? When I TA'ed CS introductory courses, students usually had a lot less trouble understanding generics than lambdas.

Not including feature X in programming language Y will also ensure that no-one who primarily uses Y will ever come to understand or appreciate X.

The blub paradox basically represents the opinion that it is always better for languages to include more powerful features.

I think that goes too far; working primarily in Scala, I am seeing firsthand how quickly extra complexity in a language hits diminishing returns. But I'm certainly of the opinion that Go leans too heavily toward handicapping the language in pursuit of simplicity.


The complexity from generics isn't conceptual, it's in the resulting code... for the same reason that the ternary ?: operator is "too complex": it's really easy to say that `foo := cond ? 1 : 0` is better than the if/else alternative, but `foo := a > (b.Active() ? Compare(b, a) : useDefault() ? DEFAULT : panic()) ? "worked" : "failed"` is a mess waiting to happen.

Same with generics. It's easy to point to simple examples. It's hard to ensure that terrible metaprogramming monstrosities don't arise.

It's possible to write bad code with go as it is, but it's actually difficult to make such bad code inscrutable.


Maybe, but the solution to this is not to say: only the compiler implementers are smart enough to not create messy code with generics, so only a few built-in data types are allowed to be generic and everyone else has to copy-paste implementations or lose type information through `interface{}`.

FYI, reified generics undermine type safety and correctness in general, which is one of the reasons why so few languages support the concept (another is that erased generics allow for easier interoperability and for the creation of a much wider class of static type systems).

In my experience, most people who think they need reified generics don't really understand them and can actually do fine with much safer concepts expressed on an erased runtime.


This made me think about Pony and its capabilities system.. Also of Haskell and all the rmap, lmap, rfold, lfold, rmagic, lmurloc.. :P

It is because some people here who have no stake in Go programming seem extraordinarily angry, or maybe just concerned about why the hell the Go authors didn't just override Go programmers' wishes and listen only to the enlightened PL experts here and on Reddit.

If it were a topic of gender or race instead of programming languages, I am sure this behavior would be called out as mansplaining/whitesplaining.

I do wish some term like 'PLsplaining' were used for this infuriating behavior of calling Go/PHP or whatever programmers clueless.


Having used C#, C/C++, Ruby, Perl, PHP, and Javascript, I find golang to be one of the most refreshing approaches to programming among them all. I think many people are finding themselves feeling upset because they are convinced that the simplicity of golang implies only idiots use it which then must mean that the language will be a failure (almost at an a priori level).

I don't think anyone has been arguing against its ease of use or its readability. Most people seem to be against its ergonomics and intentional lack of features.

I like Go, and its intentional lack of features, or very slow speed of adding features, is my favorite feature. Maybe I am a pseudo programmer, like most people who dislike Go would probably say.

I'm not saying it's a good language or not. I haven't written in it enough to determine that. I know enough to say that anyone who says it's not a "Real Programming Language" is full of it.

What if those two are related? ;)

To be fair, forcing \t on people was a douchebag move.

> It also really makes me wonder if I am living in some sort of alternate reality. I am a professional programmer working at a large company and I am pretty sure that 95% of my colleagues (myself included, as difficult as it is for me to admit)

What are you and your co-workers using aside from Go?


Mostly C and C++. Personally I am working in a C++ codebase right now and I struggle with all sorts of finicky cross-compilation and linking problems that just wouldn't exist in Go.

We also do some Python and Javascript for scripts and web work.


"Go 2 considered harmful" - Edsger Dijkstra, 1968

Underrated comment. :)

Haha, thanks.

If there's a Go2 there will be a lot of these headlines for blog posts/rants.

It doesn't matter if it is true or not, but that phrase is wonderful.



About generics: I've never had a deep look at it, but I've always wondered if most of the problem couldn't be solved by having base types (int, string, date, float, ..) implement fundamental interfaces (sortable, hashable, etc). I suppose that if the solution were that simple people would've already thought about it.

In particular, I think it could help with method dispatch, but probably not with memory allocation (although Go already uses interfaces pretty extensively).
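
As an illustration of both the idea and its limit, a hypothetical sketch (Comparable, Int and Max are all made-up names): even if the base types satisfied such an interface, code written against the interface still returns the interface, so the caller has to assert back to the concrete type, and interface values generally box their contents, which is why this doesn't help with the allocation side.

    package main

    import "fmt"

    // Hypothetical "fundamental" interface the comment imagines int, string,
    // etc. could satisfy. Today you have to wrap int yourself to fake it.
    type Comparable interface {
        Less(other Comparable) bool
    }

    type Int int

    func (a Int) Less(b Comparable) bool { return a < b.(Int) }

    // Works for any Comparable, but the result is a Comparable, not an Int.
    func Max(xs []Comparable) Comparable {
        m := xs[0]
        for _, x := range xs[1:] {
            if m.Less(x) {
                m = x
            }
        }
        return m
    }

    func main() {
        xs := []Comparable{Int(3), Int(7), Int(1)}
        m := Max(xs).(Int) // caller must assert back to the concrete type
        fmt.Println(m + 1) // only then can it be used as a number again
    }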


I would love to see uniform-function-call-syntax.

Turning: func (f Foo) name() string

Into: func name(f Foo) string

Callable like this: f.name() or name(f)

Extending foreign structs from another package should be possible too, just without access to private fields.

Other than that, if-as-expression would be nice to have, too.


Sorry, but if "a major cloud platform suffers a production outage at midnight" is the bar for effecting change in Go, then I want no part of it.

Regarding the lack-of-generics problem, is there a way to get around it? There are always plenty of tools doing that; if the IDE can patch the syntax and support some kind of pragma to generate the template code, then the problem is almost solved. Not sure if it'll cover all the cases Java does, though.

> For example, I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve.

This is sampling bias at work. The people who need generics have long since given up on Go and no longer even bother participating in Go-related discussions, because they believe it will never happen. Meanwhile, if you're still using Go, you must have use cases where the lack of generics is not a problem and the existing language features are good enough. Sampling Go users to try and find compelling use cases for adding generics is not going to yield any useful data, almost by definition.


> no longer even bother participating in Go-related discussions, because they believe it will never happen

/raises hand

I like when tools are good, but I've basically written off Go as a tool for generating unsustainable code right now (and a big part of it is the odious options, either type-unsafety or code generation, for things that are trivially handled by parametricity). If things change, I'll be happy to revisit it, but the described thought process just ignores my existence. And that's fine from my perspective, because I don't need Go, but I'd still like it to be better in case I do need to use it.


I think the Go team would still like to understand your production uses that caused you to write off Go. What is your problem domain? How would you accomplish that in Go? How did you actually solve your problem with a different language?

For me, user provided data structures are the only thing that comes to mind (sync.Map for example) in my production use of Go. But even then, my pain (time spent, errors made) due to a lack of generics is low.

You may have written off Go. But if you're willing, I bet the Go team would still like to read your production code use-cases that caused you to do so.


My "problem domain" is "good, correct code". I write code in many different spaces, from web software to mobile apps to (a lot of) devops tools to (less these days) games. My criticism of Go isn't "it's not good at code for my problem domain," it's "it's not good at code for your problem domain, either, and your workarounds shouldn't need to exist."

User-provided data structures are a big one. (A language where I have to copypaste to have typesafe trees is a bad language.) But, beyond that, I build stuff to be compiler-verified. Here's a trivial example that I did yesterday that I straight-up can't do in Go, but did in C# as part of an object graph rehydration routine (where object A required pointers to object B from a different data set, and since it's 2017 we aren't using singletons so the JSON deserializer takes in converters that look up objects based on keys in object A's JSON representation):

    public interface IKeyed<T> { T Key { get; } }

Just being able to do that on an object is powerful. C# has reified types; I know what T is at runtime. (I can't specialize on the type in C#, but I can fake it.) But I also know what it is at compile-time, and so I can't accidentally pass `IKeyed<String>` and `IDictionary<Int32, DependentObject>` to the same function because type constraints, yo.
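
For contrast, the closest I can get in today's Go is a sketch like the following (my names, not the actual code), where the key's type parameter is simply erased and the compiler can no longer relate an object's key type to the table it's looked up in:

    package main

    import "fmt"

    // Rough Go approximation of IKeyed<T>: the key type is erased.
    type Keyed interface {
        Key() interface{}
    }

    type Dependent struct{ ID int }

    func (d Dependent) Key() interface{} { return d.ID }

    // Nothing ties the map's key type to the object's key type; an object
    // with string keys would compile fine here and only fail at runtime.
    func Lookup(table map[int]string, obj Keyed) (string, bool) {
        k, ok := obj.Key().(int)
        if !ok {
            return "", false
        }
        v, ok := table[k]
        return v, ok
    }

    func main() {
        table := map[int]string{7: "object B"}
        fmt.Println(Lookup(table, Dependent{ID: 7}))
    }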

I don't really care about fast code, because computers are all future-computers-from-beyond-the-moon. If I'm using a statically-typed language, I care about correct. Duplicated code is code that will--not may--lead to bugs. State I have to maintain in my head (like "this is an int map, talking to an int-keyed object set") is state that will--not may--lead to bugs.

If you're a statically-typed language that isn't helping me avoid bugs, you might as well not exist. You don't have to be whomping around stuff like Scala's Shapeless--I would argue that you shouldn't--but you have to provide at least a basic level of sanity and the difficulty of expressing stuff outside of parameterized types (even when there are workarounds) makes it not worth the hassle. I'll cape up for damned near everything in at least some context, from dynamic languages like Ruby and ES6 to Java (well, Kotlin) or C# to Modern C++ to Scheme or a Lisp. My no-buenos on programming languages are limited pretty exclusively to C and Go because both languages encourage me, encourage all of us who use them, to write bad code.


I'm new to Go and actually don't do professional IT work, but I immediately felt the need for a type that allowed me to store any type.

My use case was (and still is) writing a spreadsheet where a user can enter different stuff in a cell and I want to store the value in an underlying data type. I ended up storing everything as a string because I couldn't figure out an efficient way to implement it.

My goal would have been to have one generic type that can store the cell types text and number which I can store in an array. This generic type could then implement an interface method like format() which calls a different format method depending on the type the cell has.

I played around with interface and reflect.TypeOf but lost too much performance compared to just storing everything in a string. Strings on the other hand now fill up my memory with stuff I don't need - at least that is my impression.

I don't have programming experience, so maybe I misunderstand the discussion or just overlooked an efficient way to solve my issue. So sorry if the example I mentioned is something easily doable in Go.


> My goal would have been to have one generic type that can store the cell types text and number which I can store in an array.

FWIW generics wouldn't help you with that, "sum types" would. A sum type is a souped-up enum which can store associated data alongside the "enum tag", so you can have e.g.

    enum Foo {
        Bar(String),
        Baz(u8, u32),
        Qux,
    }
and at runtime you ask which "value" is in your enum:

    match foo {
        Foo::Bar(s) => { /* got a Bar with a string inside */ }
        Foo::Baz(n, _) => { /* got a Baz, took the first number */ }
        Foo::Qux => { /* got a Qux, it stores no data */ }
    }
(and in most languages the compiler will yell at you if you forget one of the variants).

So for your use case you'd have e.g.

    enum Value {
        Boolean(bool),
        Integer(u32),
        String(String),
        // etc…
    }
> I played around with interface and reflect.TypeOf but lost too much performance compared to just storing everything in a string.

Maybe try a struct with an explicit tag rather than reflection? e.g.

    type ValueType int

    const (
        TYPE1 ValueType = iota
        TYPE2
        TYPE3
        // … one for each concrete type you want to wrap
    )

    type Value struct {
        kind  ValueType
        value interface{}
    }
and then you can

    switch value.kind {
    case TYPE1:
        v := value.value.(Type1) // plain type assertion, no reflection
        // … use v
    case TYPE2:
        v := value.value.(Type2)
        // … use v
    // etc...
    }
I'd expect reflect.TypeOf to be very expensive, this is a check + a cast.
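
And to make the spreadsheet case concrete, here is a complete, minimal version of that idea with made-up cell types (Cell, TextCell, NumberCell and Format are illustrative names, not from any library):

    package main

    import "fmt"

    type CellType int

    const (
        TextCell CellType = iota
        NumberCell
    )

    // Cell is a tagged union: the tag says which concrete type is stored.
    type Cell struct {
        kind  CellType
        value interface{}
    }

    // Format dispatches on the tag and asserts to the matching concrete type.
    func (c Cell) Format() string {
        switch c.kind {
        case TextCell:
            return c.value.(string)
        case NumberCell:
            return fmt.Sprintf("%.2f", c.value.(float64))
        default:
            return ""
        }
    }

    func main() {
        cells := []Cell{
            {kind: TextCell, value: "hello"},
            {kind: NumberCell, value: 42.5},
        }
        for _, c := range cells {
            fmt.Println(c.Format())
        }
    }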

Thanks for that information. I had thought enums could only be used as named types. Good to know they can hold data as well. I had thought of that struct as well (ok, not in that professional way with types as constants and iota :-)) but found it a bit annoying that I have to store a type myself although the value itself should already carry its type information somewhere - consequently that information is stored redundantly. But I'll definitely try that. Thanks.

> I had thought enums can only be used as named types. Good to know they can hold data as well.

That depends on the language, and enums being sum types also depends on the language.

* Rust and Swift enums are sum types (they can hold data and every variant can hold different stuff); there are also a ton of (mostly functional) languages with sum types not called enum: Haskell, F#, OCaml, … and there are languages which replicate them via other structures (sealed classes in Kotlin, case classes in Scala).

* Java enums (a bare "enum" is the C one) can hold data, but every variant must hold data of the same type, so you'd need up/down casts

* C++, C# and C enums can't hold data

Go does not have enums.


C++17 has support for sum types via std::variant.

Minor nitpick: case classes in Scala are product types, not sum types. I think you meant "sealed traits".

The combination of both really.

FWIW, "enums" are in most languages you'll run into somewhat distinct from "enum types".

I kind of think the same way, but thanks to Docker's and K8s' success, we might have to deal with Go code regardless of how we think about it.

Maybe. My intuition is that k8s is going to lose its luster once people actually have to do a little math related to its costs; while I think there are real reasons for something like it in on-prem environments, I think the cloud-in-your-cloud-so-you-can-cloud-while-you-cloud approach currently being rolled out is profoundly unwise. (Which is to say: k8s is functionally something to weld together to make an OpenStack alternative--such as it is--rather than a layer to plop on top of one.)

Docker...yeah. We're stuck with it there. And their historical security posture doesn't make me super excited, but...yeah.

I hate when you're right. But you usually are.


We have a similar viewpoint w.r.t Higher Kinded Types in F#. Much of the motivation for HKTs comes from people wishing to use them how they've used them in other languages, but we've yet to see something like this in our language suggestions or design repositories:

"Here is a concrete example of a task I must accomplish with <design involving HKTs>, and the lack of HKTs prohibits me from doing this in such a way that I must completely re-think my approach.

(Concrete example explained)

I am of the opinion that lacking HKTs prevents me from having any kind of elegant and extensible design which will hold strong for years as the code around it changes."

Not that HKTs and TypeClasses aren't interesting, but without that kind of motivation, it's quite difficult to justify the incredibly large cost of implementing them well. And that cost isn't just in the code and testing of the compiler. This cost also surfaces in tooling, documentation, and mindshare. It could also have an unknown effect on the ethos of F#, which for better or for worse, does bring with it a bit of a somewhat intangible quality that many people like.

Personally, I think Golang should implement generics. But I'm sympathetic with the views expressed by Russ Cox here. And I don't think it's just sampling bias.


This is apples and hand grenades, though. You're looking for a use case for HKTs. I just want to make a tree without copy-pasting.

Fair point.

This is a fair point. OTOH, just punting entirely seems like the wrong reaction.

Rust, for example, has been thinking about HKTs for a while, and they might not fit well in the language. Rust wants to solve the problems that HKTs solve, and it looks like the solution is converging to ATCs (associated type constructors).

It's taking a while, but Rust is newish, and it isn't taking anywhere near as long as Go is taking to get generics.


It's the same thing in principle, but it's one of those cases where details matter a lot. Generics have been mainstream - as in, implemented in mainstream programming languages used to write millions of lines of production code - for over a decade now. At this point, someone claiming that they need specific user scenarios to convince them that generics are worthwhile, or that they aren't aware of a "good enough" way to implement them, beggars belief.

I think you're mostly right. Doesn't it make sense then to ask for help correcting that bias?

I have used generics for quite a long time. When I used them I loved them. Then I moved to a different language for a few years, and lost generics. Now I have written large programs in GoLang too - and frankly I do not miss generics at all. I also find reading code with generics isn't straightforward - it requires a slightly more complicated mental model.

So what's happening here? The parent made a reasonable comment, without flaming anybody. All his children give evidence for him (except one but they weren't able to downvote) and yet the parent is downvoted. Votes don't matter but I'm curious about drive-by-downvoting. What does that signify? Fanboyism?

Yes, using empty interfaces and adding common data structures to the runtime eliminate the primary use-case of generics. Who would have thought..

The leap second problem reminds me of this post[0].

[0] https://news.ycombinator.com/item?id=14121780


This was announced at GopherCon today. FYI, if folks are interested in following along other conference proceedings, there is no livestream, but there is an official liveblog: https://sourcegraph.com/gophercon

It's on twitch I believe

Disclaimer: I mean this with love

This post really frustrates me, because the lengthy discussion about identifying problems and implementing solutions is pure BS. Go read the years' worth of tickets asking for monotonic time, and see how notable names in the core team responded. Pick any particular issue people commonly have with golang, and you'll likely find a ticket with the same pattern: overt dismissal, with a heavy moralizing tone that you should feel bad for even asking about the issue. It's infuriating that the same people making those comments are now taking credit for the solution, when they had to be dragged into even admitting the issue was legitimate.


> asking for monotonic time

His anecdote about Google/Amazon using leap smears to solve the problem is telling. I suspect that they were unable to see outside their own bubble to think about how others may be impacted.

> We did what we always do when there's a problem without a clear solution: we waited

The original report described the issue and the potential consequences very well, and the problem didn't change between the initial report and when Cloudflare hit it. It wasn't until a Serious Industrial User (to borrow a term from Xavier @ OCaml) got bitten in a very public way that they actually began thinking about what a clear solution would look like.


> We did what we always do when there's a problem without a clear solution: we waited

And this is exactly why the Go designers don't understand language design, and how this ignorance shines through every single place in their language.

Language design is about compromises. There is never a perfect solution, only one that satisfies certain parameters while compromising others. You, the designer, are here to make the tough choice and steer the language in a direction that satisfies your users.

Besides, characterizing their position as "We waited" is very disingenuous. First of all, this is more stalling than waiting, and second, waiting implies they at least acknowledge the problems, which the Go team famously never does. Read how various issues are answered and then summarily closed with smug and condescending tones.


You are being deliberately negative here. Choosing to forego generics in favor of simplicity (and its impact along several axes) is a postcard example of a compromise. It is a tough choice that many people will be unhappy with, but there are also many Go programmers that are extremely satisfied with that direction.

As for acknowledging, well, they have always been very clear about their position. It makes no sense to spend a decade answering the same question over and over with a long and elaborate response which the person asking has already seen and dismissed. I can understand them becoming condescending after a decade of facing people who act with a deliberately obtuse and holier-than-thou attitude.

It's not like they have been lazy - every release of Go has had a large amount of improvements that matter. Working on generics would have meant sacrificing a (probably large) amount of them.

(for the record, I dearly miss generics too!)


Regarding the leap second bug, I suspect this is an example of perfect being the enemy of the good.

It appeared to me that the golang devs believed so strongly in the superiority of leap second smearing that waiting for everyone to adopt it was better than compromising their API or the implementation of time.Time.


Given Google's orientation towards server-side web applications, it makes sense. On the other hand, the real-time OSs such as QNX have had monotonic clocks available for decades, and they use them for all delay and time interval measurements. (In real-time, you use the monotonic clock for almost everything that matters, and the day clock for logging and display only. You don't want the clock used for delays and time intervals to "smear"; it might have physical effects if it's being used to measure RPM or something and seconds got slightly longer.)
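
For what it's worth, the Go idiom for interval measurement after the fix looks like this (a minimal sketch; as of Go 1.9, time.Now embeds a monotonic clock reading and time.Since uses it):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now() // since Go 1.9, also carries a monotonic clock reading

        time.Sleep(100 * time.Millisecond) // stand-in for the work being timed

        // time.Since subtracts using the monotonic reading, so a leap second
        // (or any wall-clock step) during the work can't make this negative.
        fmt.Println("elapsed:", time.Since(start))
    }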

Go is great for server-side web applications. The libraries for that are all there and well debugged. Anything else, not so much.