> FWIW I think you are giving Go's compiler far too much benefit. Why risk it not getting optimised when passing a pointer will definitely not copy anything. And who's to say it won't optimise that, too?

How much experience do you have with compiler back-ends? Compilers improve over time, and absent a very direct, immediate need you're better off letting them optimize your code better in the future than trying to clamp down on every risk of something not getting optimized now, interfering with both future maintenance and future optimization.

This is no different from when some people kept using "goto" because loops in the new structured-programming style were slower. Soon the compilers were tiling, fusing, fissioning, strip-mining, unrolling, and parallelizing the structured loops to stupendous effect... but skipping the parts with goto in them, because those were icky and potentially hard to figure out.

Also, if you study compiler optimization you will find that passing a pointer definitely can cause excessive copying, and other costs, because of the aliasing problem. It just doesn't fit into a naïve, 1970s mental model of how compiled code works. Go may be in many ways a 1970s language, but much of this is now standard fare even for a young compiler. Plus there's a gcc front-end for Go, and gcc definitely has mature optimizations.

Where do the costs come from? Seemingly everywhere. Every time the compiler wants to access a variable, it needs to re-read that variable from memory, unless it can rule out aliasing, in which case it can registerize it. Reloading the variable on demand every time is a form of copying, without the efficiencies of a full object copy. And if the access changes the variable, the same goes for writing the new value back to memory afterwards. The register allocator gets less freedom to make optimal choices.
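To make the reload cost concrete, here's a minimal runnable sketch (`sumTwice` is a made-up function, not anything from the discussion above). Because the compiler cannot prove that `n` doesn't point into `buf`, it must re-read `*n` after the store instead of keeping it in a register, and the aliased call really does observe the write:

```go
package main

import "fmt"

// sumTwice is a hypothetical example: buf and n may alias, so after
// the store to buf[0] the compiler has to reload *n from memory rather
// than reuse the value it read before the store.
func sumTwice(buf []int, n *int) int {
	buf[0] = *n + 1 // may change *n, if n happens to point into buf
	return buf[0] + *n
}

func main() {
	buf := []int{10, 20}
	x := 5
	fmt.Println(sumTwice(buf, &x))      // no aliasing: 6 + 5 = 11
	fmt.Println(sumTwice(buf, &buf[0])) // aliasing: the store changed *n, 7 + 7 = 14
}
```

If the compiler registerized `*n` across the store, the second call would return the wrong answer, which is exactly why it can't.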
Trivial operations can't be optimized away if they put an object through intermediate states that may be observed through aliasing. The instruction scheduler is much more constrained, which also eats away at the benefits you get from inlining. Escape analysis is less successful, so more objects need to be allocated on the heap that could have been allocated on the stack and/or registerized. More objects on the heap means more work for the garbage collector, more lock contention in the memory allocator, and less cache locality.

You ask who's to say that the compiler won't optimize away pass-by-reference. Nothing obscures the compiler's view of what happens to an object quite like passing its address around. That's what usually creates aliases. It's harder for a compiler (and, as I said, technically incomputable) to prove that aliasing is harmless in a given case than it is to optimize away an object copy in cases where there is no aliasing in the first place. The two are not symmetrical.

Maybe it's best if I give you an example. Imagine you're the compiler. Consider this pseudocode. (If it looks silly, imagine it being the outcome of earlier optimization stages such as inlining. Imagine the computations being more interesting. Imagine much of this happening in a performance-critical loop. Anything to keep you awake.)

```go
type Computation struct {
	Increment int
}

func Compute() int {
	c := Computation{Increment: 1}
	total := 0
	Foo(&c)
	total += c.Increment
	Bar()
	total += c.Increment
	Blub()
	total += c.Increment
	return total
}
```

How do you optimize Compute()? You might think you can constant-propagate c.Increment:

```go
Foo(&Computation{Increment: 1})
Bar()
Blub()
return 3
```

Great. Prime candidate for inlining, and further optimization from there. It might end up being just a few instructions. But wait! Foo() takes its Computation by reference. What if Foo() modifies c.Increment? We can't just optimize the counting away. We could if Foo() had taken a copy of c, but not while it takes a reference.
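To see why a copy would have helped, here's a runnable sketch. FooByPtr and FooByValue are invented stand-ins for Foo(): only the by-pointer version can disturb the caller's field, so only in the by-value case may the compiler treat c.Increment as the constant 1 after the call:

```go
package main

import "fmt"

type Computation struct{ Increment int }

// Invented stand-ins for Foo(): one takes a pointer, one takes a copy.
func FooByPtr(c *Computation)  { c.Increment = 100 } // the caller's c changes
func FooByValue(c Computation) { c.Increment = 100 } // only a local copy changes

func main() {
	a := Computation{Increment: 1}
	FooByPtr(&a)
	fmt.Println(a.Increment) // 100: the compiler must assume this can happen

	b := Computation{Increment: 1}
	FooByValue(b)
	fmt.Println(b.Increment) // 1: b.Increment is provably still the constant
}
```

With the by-value call, every later read of b.Increment can be folded to 1 at compile time; with the by-pointer call, every read has to go back to memory.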
So how about this?

```go
c := Computation{Increment: 1}
Foo(&c)
Bar()
Blub()
return 3 * c.Increment
```

Still pretty decent, right? Except you don't know where Foo() *put* your pointer to c. Bar() and Blub() *could* be modifying c through a pointer that they got from some implicit state. In this example a global variable; in the real world it might also be a chain of pointers reachable from arguments you pass to these functions. And so the best we can do is...

```go
c := Computation{Increment: 1}
Foo(&c)
total := c.Increment
Bar()
total += c.Increment
Blub()
return total + c.Increment
```

Damn. Nothing but a few worthless peephole optimizations. We get just one meager constant propagation: folding away the initial `total := 0`. Plus we have to allocate c on the heap and garbage-collect it later. All because Foo() takes its Computation by reference.

If you're going to put the compiler through this, you had better mean it! Passing a pointer says "I need this reference for unspecified dirty business, and you'd better not perform any optimizations that might interfere with that."
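For completeness, here's a runnable version of that scenario, with invented bodies for Foo(), Bar(), and Blub(): Foo() stashes the pointer in a global, and the later calls mutate c through it, so the naive constant fold to 3 would simply be wrong:

```go
package main

import "fmt"

type Computation struct{ Increment int }

var stash *Computation // implicit shared state, invisible at the call site

// Invented bodies: Foo squirrels the pointer away, and Bar and Blub
// then mutate c through it behind the caller's back.
func Foo(c *Computation) { stash = c }
func Bar()               { stash.Increment++ }
func Blub()              { stash.Increment++ }

func Compute() int {
	c := Computation{Increment: 1}
	total := 0
	Foo(&c)
	total += c.Increment // reads 1
	Bar()
	total += c.Increment // reads 2
	Blub()
	total += c.Increment // reads 3
	return total
}

func main() {
	fmt.Println(Compute()) // 6, not the 3 a naive constant fold would produce
}
```

Every read of c.Increment sees a different value, which is exactly why the compiler is forced into the load-on-every-access code above.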