Merge lp:~axwalk/juju-core/lp1235717-refactor-simplestreams-writemetadata into lp:~go-bot/juju-core/trunk
Status: Merged
Approved by: Andrew Wilkins
Approved revision: no longer in the source branch
Merged at revision: 1988
Proposed branch: lp:~axwalk/juju-core/lp1235717-refactor-simplestreams-writemetadata
Merge into: lp:~go-bot/juju-core/trunk
Diff against target: 896 lines (+427/-142), 10 files modified
  cmd/jujud/agent_test.go (+1/-1)
  cmd/plugins/juju-metadata/toolsmetadata.go (+21/-2)
  cmd/plugins/juju-metadata/toolsmetadata_test.go (+16/-14)
  environs/sync/sync.go (+1/-1)
  environs/testing/tools.go (+17/-7)
  environs/tools/simplestreams.go (+132/-82)
  environs/tools/simplestreams_test.go (+226/-15)
  environs/tools/tools.go (+1/-6)
  testing/targz.go (+8/-10)
  worker/upgrader/upgrader_test.go (+4/-4)
To merge this branch: bzr merge lp:~axwalk/juju-core/lp1235717-refactor-simplestreams-writemetadata
Related bugs: none
Reviewer: Juju Engineering (review pending)
Review via email: mp+189767@code.launchpad.net
Commit message
Fix tools metadata generation for null provider
There are three functional changes here:
- If there's existing tools metadata, use it
if it has sha256/size in it, and the tools
haven't been replaced.
- Don't use net/http or the tools URL when
fetching tools to compute sha256/size. That
doesn't work with sshstorage.
- Don't fetch existing tools to resolve metadata
during normal tools syncing operations. Only
the "juju metadata generate-tools" command
will do this now.
Additionally, I have refactored environs/tools to make the
intent of WriteMetadata (now MergeAndWriteMetadata)
more obvious. That function is now composed of
several other more specific functions.
Fixes #1235717
Description of the change
Fix tools metadata generation for null provider
There are three functional changes here:
- If there's existing tools metadata, use it
if it has sha256/size in it, and the tools
haven't been replaced.
- Don't use net/http or the tools URL when
fetching tools to compute sha256/size. That
doesn't work with sshstorage.
- Don't fetch existing tools to resolve metadata
during normal tools syncing operations. Only
the "juju metadata generate-tools" command
will do this now.
Additionally, I have refactored environs/tools to make the
intent of WriteMetadata (now MergeAndWriteMetadata)
more obvious. That function is now composed of
several other more specific functions.
Fixes #1235717
Andrew Wilkins (axwalk) wrote:
Please take a look.
Roger Peppe (rogpeppe) wrote:
This looks great in general, with a few remarks and suggestions below.
File cmd/plugins/juju-metadata/toolsmetadata.go (the tools.MergeAndWriteMetadata call with a tools.ResolveFlag argument):
I'd prefer to see explicit values for the resolve flag here - using it
as a boolean gets around the advantages of using the named constants, I
think.
File environs/tools/simplestreams.go:
s/metadata/size and hash/ ?
environs/tools/simplestreams.go:
if md.Size != 0 {
continue
}
to save an indent level?
environs/tools/simplestreams.go:
func MergeMetadata(newMetadata, oldMetadata []*ToolsMetadata) []*ToolsMetadata {
I'm not sure that "new" and "old" are appropriate prefixes here - they
seem to imply that the arguments are treated differently, but isn't
MergeMetadata commutative?
Personally I'd use something like m0 and m1, but go with your own
thoughts for naming.
environs/tools/simplestreams.go:
I know we do this elsewhere, but I've come to dislike
this idiom - it's an unsatisfactory halfway house between
named constants and bools. There's still code that
uses the bool semantics, which means that these names aren't
really helping much because you have to know their values
anyway.
I'd prefer either to use int and iota or to stick with the bool
and lose the named constants.
environs/tools/simplestreams.go:
generated from the given tools list.
It's not clear what targetTools is here. It sounds like it's
a list of tools in the target storage, but it doesn't actually
seem like that's the case.
environs/tools/simplestreams.go:
// If resolve is Resolve..., metadata is resolved by fetching the tools
environs/tools/simplestreams.go:
func MergeAndWriteMetadata(stor storage.Storage, targetTools coretools.List, resolve ResolveFlag) error {
s/stor/target/ ?
(given that we refer to "the target storage" above)
Andrew Wilkins (axwalk) wrote:
File cmd/plugins/juju-metadata/toolsmetadata.go (the tools.MergeAndWriteMetadata call with a tools.ResolveFlag argument):
On 2013/10/09 12:08:57, rog wrote:
> I'd prefer to see explicit values for the resolve flag here - using it
> as a boolean gets around the advantages of using the named constants, I
> think.
Done.
File environs/tools/simplestreams.go:
On 2013/10/09 12:08:57, rog wrote:
> s/metadata/size and hash/ ?
Done.
environs/tools/simplestreams.go:
On 2013/10/09 12:08:57, rog wrote:
> if md.Size != 0 {
> continue
> }
> to save an indent level?
Done.
environs/tools/simplestreams.go:
func MergeMetadata(newMetadata, oldMetadata []*ToolsMetadata) []*ToolsMetadata {
On 2013/10/09 12:08:57, rog wrote:
> I'm not sure that "new" and "old" are appropriate prefixes here - they
> seem to imply that the arguments are treated differently, but isn't
> MergeMetadata commutative?
>
> Personally I'd use something like m0 and m1, but go with your own
> thoughts for naming.
It's not commutative, per the last sentence of the doc comment:
"If both entries have information, prefer the entry from newMetadata".
The idea behind that is that if both have size&hash, then the one from
"newMetadata" is considered newer. That's to cater for copying over the
top of existing tools (say, a dev build).
environs/tools/simplestreams.go:
On 2013/10/09 12:08:57, rog wrote:
> I know we do this elsewhere, but I've come to dislike
> this idiom - it's an unsatisfactory halfway house between
> named constants and bools. There's still code that
> uses the bool semantics, which means that these names aren't
> really helping much because you have to know their values
> anyway.
> I'd prefer either to use int and iota or to stick with the bool
> and lose the named constants.
Consider a bool as (conceptually) a 1-bit unsigned int; then what's the
difference between that and int with constant named values? The only
difference I see is that the bool is range limited to [0, 1], which is a
very good thing.
I'd be happy to drop the named type. Would that be a reasonable
alternative? i.e.:
const (
DontResolve = false
Resolve = true
)
func MergeAndWriteMetadata(stor storage.Storage, tools coretools.List,
	resolve bool) error {
	...
}
then people can use bools if they want, without much remorse, or they
can use the named constant...
Andrew Wilkins (axwalk) wrote:
Please take a look.
Roger Peppe (rogpeppe) wrote:
LGTM with some thoughts below.
For the future, I think that the "resolve" mode is unnecessary and
should go - if we have items in the target bucket that we don't have
hashes for, we should just ignore them. This will actually work fine in
practice, I think, even if we abort a SyncTools half way through,
because when we retry the SyncTools, any items in the target bucket that
we didn't get around to adding to the target bucket simplestreams data
should have metadata in the source bucket's simplestreams data.
In general, I think retro-discovering hash data by trying to find out
what the data currently is is a flawed idea - the hash of some tools
should be generated on first upload and preserved forever after.
File environs/tools/simplestreams.go:
func MergeMetadata(newMetadata, oldMetadata []*ToolsMetadata) []*ToolsMetadata {
On 2013/10/09 14:28:25, axw wrote:
> On 2013/10/09 12:08:57, rog wrote:
> > I'm not sure that "new" and "old" are appropriate prefixes here - they
> > seem to imply that the arguments are treated differently, but isn't
> > MergeMetadata commutative?
> >
> > Personally I'd use something like m0 and m1, but go with your own
> > thoughts for naming.
>
> It's not commutative, per the last sentence of the doc comment:
> "If both entries have information, prefer the entry from newMetadata".
> The idea behind that is that if both have size&hash, then the one from
> "newMetadata" is considered newer. That's to cater for copying over the
> top of existing tools (say, a dev build).
Hmm, that *should* never happen - a version should always correspond
only to one particular tarball. I think if we do get a clash like this,
then we have a problem that I suspect our current logic won't cope with.
Perhaps this function should check for inconsistencies and return an
error if it finds one.
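The merge semantics the thread converges on (and which the final MergeMetadata in the preview diff implements) can be sketched like this. The stand-in `meta` type and `mergeMetadata` function are simplified illustrations, not the juju-core API: a complete entry (non-zero size plus hash) fills in an incomplete one, and two complete entries that disagree produce an error instead of silently preferring either side:

```go
package main

import "fmt"

// meta is a simplified stand-in for ToolsMetadata: a version key
// plus the size and SHA-256 hash that may or may not be known.
type meta struct {
	Version string
	Size    int64
	SHA256  string
}

// mergeMetadata merges two metadata lists keyed by version. An entry
// with size/hash takes precedence over one without; two complete
// entries that disagree are reported as an inconsistency.
func mergeMetadata(a, b []meta) ([]meta, error) {
	merged := make(map[string]meta)
	for _, m := range a {
		merged[m.Version] = m
	}
	for _, m := range b {
		existing, ok := merged[m.Version]
		switch {
		case !ok:
			merged[m.Version] = m
		case m.Size == 0:
			// b's entry is incomplete; keep whatever we already have.
		case existing.Size == 0:
			merged[m.Version] = m // b's entry completes a's
		case existing.Size != m.Size || existing.SHA256 != m.SHA256:
			return nil, fmt.Errorf("metadata mismatch for %s", m.Version)
		}
	}
	out := make([]meta, 0, len(merged))
	for _, m := range merged {
		out = append(out, m)
	}
	return out, nil
}

func main() {
	a := []meta{{Version: "1.16.0-precise-amd64"}}                          // size/hash unknown
	b := []meta{{Version: "1.16.0-precise-amd64", Size: 42, SHA256: "abc"}} // complete
	out, err := mergeMetadata(a, b)
	fmt.Println(len(out), out[0].Size, err) // prints: 1 42 <nil>
}
```

The mismatch error corresponds to rog's suggestion: a version should map to exactly one tarball, so two complete entries that disagree indicate a real problem rather than a case to resolve silently.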
environs/tools/simplestreams.go:
On 2013/10/09 14:28:25, axw wrote:
> On 2013/10/09 12:08:57, rog wrote:
> > I know we do this elsewhere, but I've come to dislike
> > this idiom - it's an unsatisfactory halfway house between
> > named constants and bools. There's still code that
> > uses the bool semantics, which means that these names aren't
> > really helping much because you have to know their values
> > anyway.
> >
> > I'd prefer either to use int and iota or to stick with the bool
> > and lose the named constants.
>
> Consider a bool as (conceptually) a 1-bit unsigned int; then what's the
> difference between that and int with constant named values? The only
> difference I see is that the bool is range limited to [0, 1], which is a
> very good thing.
The main difference I see is that various pieces of code will use a
boolean directly, or a boolean test and the sense of the code isn't
clear in that case. For example, in another piece of code in this CL,
there's a boolean field called "fetch" which is passed direc...
Andrew Wilkins (axwalk) wrote:
On 2013/10/09 16:29:42, rog wrote:
> LGTM with some thoughts below.
Thanks. I'll change the newMetadata/oldMetadata naming, but
I think I'll leave the others to a followup.
> For the future, I think that the "resolve" mode is unnecessary and
> should go - if we have items in the target bucket that we don't have
> hashes for, we should just ignore them. This will actually work fine in
> practice, I think, even if we abort a SyncTools half way through,
> because when we retry the SyncTools, any items in the target bucket that
> we didn't get around to adding to the target bucket simplestreams data
> should have metadata in the source bucket's simplestreams data.
>
> In general, I think retro-discovering hash data by trying to find out
> what the data currently is is a flawed idea - the hash of some tools
> should be generated on first upload and preserved forever after.
I'd like to say I agree, however I know the requirements here are a bit
subtle.
I think it'd be good to talk with wallyworld at SFO to nut it out.
> File environs/tools/simplestreams.go:
> func MergeMetadata(newMetadata, oldMetadata []*ToolsMetadata) []*ToolsMetadata {
> On 2013/10/09 14:28:25, axw wrote:
> > On 2013/10/09 12:08:57, rog wrote:
> > > I'm not sure that "new" and "old" are appropriate prefixes here -
> > > they seem to imply that the arguments are treated differently, but
> > > isn't MergeMetadata commutative?
> > >
> > > Personally I'd use something like m0 and m1, but go with your own
> > > thoughts for naming.
> >
> > It's not commutative, per the last sentence of the doc comment:
> > "If both entries have information, prefer the entry from newMetadata".
> >
> > The idea behind that is that if both have size&hash, then the one
> > from "newMetadata" is considered newer. That's to cater for copying
> > over the top of existing tools (say, a dev build).
>
> Hmm, that *should* never happen - a version should always correspond
> only to one particular tarball. I think if we do get a clash like this,
> then we have a problem that I suspect our current logic won't cope with.
>
> Perhaps this function should check for inconsistencies and return an
> error if it finds one.
Yeah, that sounds sensible. The tools won't be copied over if they are
already in the target, so the "newMetadata" entry won't have a
size/hash. So this code is only handling a theoretical case, anyway.
I'll make this change and repropose.
> environs/tools/simplestreams.go:
> On 2013/10/09 14:28:25, axw wrote:
> > On 2013/10/09 12:08:57, rog wrote:
> > > I know we do this elsewhere, but I've come to dislike
> > > this idiom - it's an unsatisfactory halfway house between
> > > named constants and bools. There's still code that
> > > uses the bool semantics, which means that these names aren't
> > > really helping much because you have to know th...
Andrew Wilkins (axwalk) wrote:
Please take a look.
Andrew Wilkins (axwalk) wrote:
Please take a look.
Andrew Wilkins (axwalk) wrote:
Please take a look.
Andrew Wilkins (axwalk) wrote:
On 2013/10/10 02:26:45, axw wrote:
> On 2013/10/09 16:29:42, rog wrote:
> > LGTM with some thoughts below.
>
> Thanks. I'll change the newMetadata/oldMetadata naming, but I think
> I'll leave the others to a followup.
>
> > For the future, I think that the "resolve" mode is unnecessary and
> > should go - if we have items in the target bucket that we don't have
> > hashes for, we should just ignore them. This will actually work fine
> > in practice, I think, even if we abort a SyncTools half way through,
> > because when we retry the SyncTools, any items in the target bucket
> > that we didn't get around to adding to the target bucket simplestreams
> > data should have metadata in the source bucket's simplestreams data.
> >
> > In general, I think retro-discovering hash data by trying to find out
> > what the data currently is is a flawed idea - the hash of some tools
> > should be generated on first upload and preserved forever after.
>
> I'd like to say I agree, however I know the requirements here are a bit
> subtle. I think it'd be good to talk with wallyworld at SFO to nut it
> out.
I discussed this with wallyworld on IRC, and he agreed that it should go
in 1.18.
After 1.16, all tools in storage should have corresponding metadata with
size and hash.
Preview Diff
1 | === modified file 'cmd/jujud/agent_test.go' |
2 | --- cmd/jujud/agent_test.go 2013-10-01 12:51:12 +0000 |
3 | +++ cmd/jujud/agent_test.go 2013-10-16 05:27:02 +0000 |
4 | @@ -192,7 +192,7 @@ |
5 | func (s *agentSuite) primeAgent(c *gc.C, tag, password string) (agent.Config, *coretools.Tools) { |
6 | stor := s.Conn.Environ.Storage() |
7 | agentTools := envtesting.PrimeTools(c, stor, s.DataDir(), version.Current) |
8 | - err := envtools.WriteMetadata(coretools.List{agentTools}, true, stor) |
9 | + err := envtools.MergeAndWriteMetadata(stor, coretools.List{agentTools}) |
10 | c.Assert(err, gc.IsNil) |
11 | tools1, err := agenttools.ChangeAgentTools(s.DataDir(), tag, version.Current) |
12 | c.Assert(err, gc.IsNil) |
13 | |
14 | === modified file 'cmd/plugins/juju-metadata/toolsmetadata.go' |
15 | --- cmd/plugins/juju-metadata/toolsmetadata.go 2013-09-19 02:53:44 +0000 |
16 | +++ cmd/plugins/juju-metadata/toolsmetadata.go 2013-10-16 05:27:02 +0000 |
17 | @@ -12,9 +12,11 @@ |
18 | "launchpad.net/juju-core/cmd" |
19 | "launchpad.net/juju-core/environs/config" |
20 | "launchpad.net/juju-core/environs/filestorage" |
21 | + "launchpad.net/juju-core/environs/storage" |
22 | "launchpad.net/juju-core/environs/sync" |
23 | "launchpad.net/juju-core/environs/tools" |
24 | "launchpad.net/juju-core/provider/ec2/httpstorage" |
25 | + coretools "launchpad.net/juju-core/tools" |
26 | "launchpad.net/juju-core/utils" |
27 | "launchpad.net/juju-core/version" |
28 | ) |
29 | @@ -42,7 +44,6 @@ |
30 | |
31 | func (c *ToolsMetadataCommand) SetFlags(f *gnuflag.FlagSet) { |
32 | c.EnvCommandBase.SetFlags(f) |
33 | - f.BoolVar(&c.fetch, "fetch", true, "fetch tools and compute content size and hash") |
34 | f.StringVar(&c.metadataDir, "d", "", "local directory in which to store metadata") |
35 | } |
36 | |
37 | @@ -73,5 +74,23 @@ |
38 | if err != nil { |
39 | return err |
40 | } |
41 | - return tools.WriteMetadata(toolsList, c.fetch, targetStorage) |
42 | + return mergeAndWriteMetadata(targetStorage, toolsList) |
43 | +} |
44 | + |
45 | +// This is essentially the same as tools.MergeAndWriteMetadata, but also |
46 | +// resolves metadata for existing tools by fetching them and computing |
47 | +// size/sha256 locally. |
48 | +func mergeAndWriteMetadata(stor storage.Storage, toolsList coretools.List) error { |
49 | + existing, err := tools.ReadMetadata(stor) |
50 | + if err != nil { |
51 | + return err |
52 | + } |
53 | + metadata := tools.MetadataFromTools(toolsList) |
54 | + if metadata, err = tools.MergeMetadata(metadata, existing); err != nil { |
55 | + return err |
56 | + } |
57 | + if err = tools.ResolveMetadata(stor, metadata); err != nil { |
58 | + return err |
59 | + } |
60 | + return tools.WriteMetadata(stor, metadata) |
61 | } |
62 | |
63 | === modified file 'cmd/plugins/juju-metadata/toolsmetadata_test.go' |
64 | --- cmd/plugins/juju-metadata/toolsmetadata_test.go 2013-09-30 19:52:11 +0000 |
65 | +++ cmd/plugins/juju-metadata/toolsmetadata_test.go 2013-10-16 05:27:02 +0000 |
66 | @@ -22,10 +22,12 @@ |
67 | ttesting "launchpad.net/juju-core/environs/tools/testing" |
68 | "launchpad.net/juju-core/provider/dummy" |
69 | coretesting "launchpad.net/juju-core/testing" |
70 | + "launchpad.net/juju-core/testing/testbase" |
71 | "launchpad.net/juju-core/version" |
72 | ) |
73 | |
74 | type ToolsMetadataSuite struct { |
75 | + testbase.LoggingSuite |
76 | home *coretesting.FakeHome |
77 | env environs.Environ |
78 | } |
79 | @@ -33,7 +35,13 @@ |
80 | var _ = gc.Suite(&ToolsMetadataSuite{}) |
81 | |
82 | func (s *ToolsMetadataSuite) SetUpTest(c *gc.C) { |
83 | + s.LoggingSuite.SetUpTest(c) |
84 | s.home = coretesting.MakeSampleHome(c) |
85 | + s.AddCleanup(func(*gc.C) { |
86 | + s.home.Restore() |
87 | + dummy.Reset() |
88 | + loggo.ResetLoggers() |
89 | + }) |
90 | env, err := environs.PrepareFromName("erewhemos", configstore.NewMem()) |
91 | c.Assert(err, gc.IsNil) |
92 | s.env = env |
93 | @@ -41,12 +49,6 @@ |
94 | loggo.GetLogger("").SetLogLevel(loggo.INFO) |
95 | } |
96 | |
97 | -func (s *ToolsMetadataSuite) TearDownTest(c *gc.C) { |
98 | - dummy.Reset() |
99 | - loggo.ResetLoggers() |
100 | - s.home.Restore() |
101 | -} |
102 | - |
103 | var currentVersionStrings = []string{ |
104 | // only these ones will make it into the JSON files. |
105 | version.Current.Number.String() + "-quantal-amd64", |
106 | @@ -66,13 +68,13 @@ |
107 | |
108 | func makeExpectedOutputCommon() string { |
109 | expected := `Finding tools\.\.\. |
110 | -.*Fetching tools to generate hash: .*/tools/.*juju-1\.12\.0-precise-amd64\.tgz |
111 | -.*Fetching tools to generate hash: .*/tools/.*juju-1\.12\.0-precise-i386\.tgz |
112 | -.*Fetching tools to generate hash: .*/tools/.*juju-1\.12\.0-raring-amd64\.tgz |
113 | -.*Fetching tools to generate hash: .*/tools/.*juju-1\.12\.0-raring-i386\.tgz |
114 | -.*Fetching tools to generate hash: .*/tools/.*juju-1\.13\.0-precise-amd64\.tgz |
115 | +.*Fetching tools to generate hash: 1\.12\.0-precise-amd64 |
116 | +.*Fetching tools to generate hash: 1\.12\.0-precise-i386 |
117 | +.*Fetching tools to generate hash: 1\.12\.0-raring-amd64 |
118 | +.*Fetching tools to generate hash: 1\.12\.0-raring-i386 |
119 | +.*Fetching tools to generate hash: 1\.13\.0-precise-amd64 |
120 | ` |
121 | - f := ".*Fetching tools to generate hash: .*/tools/.*juju-%s\\.tgz\n" |
122 | + f := ".*Fetching tools to generate hash: %s\n" |
123 | for _, v := range currentVersionStrings { |
124 | expected += fmt.Sprintf(f, regexp.QuoteMeta(v)) |
125 | } |
126 | @@ -145,8 +147,8 @@ |
127 | output := ctx.Stdout.(*bytes.Buffer).String() |
128 | expectedOutput := fmt.Sprintf(` |
129 | Finding tools\.\.\. |
130 | -.*Fetching tools to generate hash: .*/tools/releases/juju-%s\.tgz |
131 | -.*Fetching tools to generate hash: .*/tools/releases/juju-%s\.tgz |
132 | +.*Fetching tools to generate hash: %s |
133 | +.*Fetching tools to generate hash: %s |
134 | .*Writing tools/streams/v1/index\.json |
135 | .*Writing tools/streams/v1/com\.ubuntu\.juju:released:tools\.json |
136 | `[1:], regexp.QuoteMeta(versionStrings[0]), regexp.QuoteMeta(versionStrings[1])) |
137 | |
138 | === modified file 'environs/sync/sync.go' |
139 | --- environs/sync/sync.go 2013-10-10 11:40:54 +0000 |
140 | +++ environs/sync/sync.go 2013-10-16 05:27:02 +0000 |
141 | @@ -120,7 +120,7 @@ |
142 | logger.Infof("generating tools metadata") |
143 | if !syncContext.DryRun { |
144 | targetTools = append(targetTools, missing...) |
145 | - err = envtools.WriteMetadata(targetTools, true, targetStorage) |
146 | + err = envtools.MergeAndWriteMetadata(targetStorage, targetTools) |
147 | if err != nil { |
148 | return err |
149 | } |
150 | |
151 | === modified file 'environs/testing/tools.go' |
152 | --- environs/testing/tools.go 2013-10-03 13:19:40 +0000 |
153 | +++ environs/testing/tools.go 2013-10-16 05:27:02 +0000 |
154 | @@ -112,16 +112,25 @@ |
155 | |
156 | // UploadFakeToolsVersions puts fake tools in the supplied storage for the supplied versions. |
157 | func UploadFakeToolsVersions(stor storage.Storage, versions ...version.Binary) ([]*coretools.Tools, error) { |
158 | + // Leave existing tools alone. |
159 | + existingTools := make(map[version.Binary]*coretools.Tools) |
160 | + existing, _ := envtools.ReadList(stor, 1, -1) |
161 | + for _, tools := range existing { |
162 | + existingTools[tools.Version] = tools |
163 | + } |
164 | var agentTools coretools.List = make(coretools.List, len(versions)) |
165 | for i, version := range versions { |
166 | - t, err := uploadFakeToolsVersion(stor, version) |
167 | - if err != nil { |
168 | - return nil, err |
169 | + if tools, ok := existingTools[version]; ok { |
170 | + agentTools[i] = tools |
171 | + } else { |
172 | + t, err := uploadFakeToolsVersion(stor, version) |
173 | + if err != nil { |
174 | + return nil, err |
175 | + } |
176 | + agentTools[i] = t |
177 | } |
178 | - agentTools[i] = t |
179 | } |
180 | - err := envtools.WriteMetadata(agentTools, true, stor) |
181 | - if err != nil { |
182 | + if err := envtools.MergeAndWriteMetadata(stor, agentTools); err != nil { |
183 | return nil, err |
184 | } |
185 | return agentTools, nil |
186 | @@ -144,7 +153,7 @@ |
187 | } |
188 | agentTools[i] = t |
189 | } |
190 | - err := envtools.WriteMetadata(agentTools, true, stor) |
191 | + err := envtools.MergeAndWriteMetadata(stor, agentTools) |
192 | if err != nil { |
193 | panic(err) |
194 | } |
195 | @@ -182,6 +191,7 @@ |
196 | |
197 | // RemoveFakeTools deletes the fake tools from the supplied storage. |
198 | func RemoveFakeTools(c *gc.C, stor storage.Storage) { |
199 | + c.Logf("removing fake tools") |
200 | toolsVersion := version.Current |
201 | name := envtools.StorageName(toolsVersion) |
202 | err := stor.Remove(name) |
203 | |
204 | === modified file 'environs/tools/simplestreams.go' |
205 | --- environs/tools/simplestreams.go 2013-10-10 11:40:54 +0000 |
206 | +++ environs/tools/simplestreams.go 2013-10-16 05:27:02 +0000 |
207 | @@ -12,7 +12,6 @@ |
208 | "fmt" |
209 | "hash" |
210 | "io" |
211 | - "net/http" |
212 | "strings" |
213 | "time" |
214 | |
215 | @@ -138,7 +137,7 @@ |
216 | Arch string `json:"arch"` |
217 | Size int64 `json:"size"` |
218 | Path string `json:"path"` |
219 | - FullPath string `json:"-,omitempty"` |
220 | + FullPath string `json:"-"` |
221 | FileType string `json:"ftype"` |
222 | SHA256 string `json:"sha256"` |
223 | } |
224 | @@ -147,6 +146,16 @@ |
225 | return fmt.Sprintf("%+v", *t) |
226 | } |
227 | |
228 | +// binary returns the tools metadata's binary version, |
229 | +// which may be used for map lookup. |
230 | +func (t *ToolsMetadata) binary() version.Binary { |
231 | + return version.Binary{ |
232 | + Number: version.MustParse(t.Version), |
233 | + Series: t.Release, |
234 | + Arch: t.Arch, |
235 | + } |
236 | +} |
237 | + |
238 | func (t *ToolsMetadata) productId() (string, error) { |
239 | seriesVersion, err := simplestreams.SeriesVersion(t.Release) |
240 | if err != nil { |
241 | @@ -199,10 +208,10 @@ |
242 | func appendMatchingTools(source simplestreams.DataSource, matchingTools []interface{}, |
243 | tools map[string]interface{}, cons simplestreams.LookupConstraint) []interface{} { |
244 | |
245 | - toolsMap := make(map[string]*ToolsMetadata, len(matchingTools)) |
246 | + toolsMap := make(map[version.Binary]*ToolsMetadata, len(matchingTools)) |
247 | for _, val := range matchingTools { |
248 | tm := val.(*ToolsMetadata) |
249 | - toolsMap[fmt.Sprintf("%s-%s-%s", tm.Release, tm.Version, tm.Arch)] = tm |
250 | + toolsMap[tm.binary()] = tm |
251 | } |
252 | for _, val := range tools { |
253 | tm := val.(*ToolsMetadata) |
254 | @@ -227,7 +236,7 @@ |
255 | } |
256 | } |
257 | } |
258 | - if _, ok := toolsMap[fmt.Sprintf("%s-%s-%s", tm.Release, tm.Version, tm.Arch)]; !ok { |
259 | + if _, ok := toolsMap[tm.binary()]; !ok { |
260 | tm.FullPath, _ = source.URL(tm.Path) |
261 | matchingTools = append(matchingTools, tm) |
262 | } |
263 | @@ -240,101 +249,142 @@ |
264 | Data []byte |
265 | } |
266 | |
267 | -func WriteMetadata(toolsList coretools.List, fetch bool, metadataStore storage.Storage) error { |
268 | - // Read any existing metadata so we can merge the new tools metadata with what's there. |
269 | - // The metadata from toolsList is already present, the existing data is overwritten. |
270 | - dataSource := storage.NewStorageSimpleStreamsDataSource(metadataStore, "tools") |
271 | - toolsConstraint, err := makeToolsConstraint(simplestreams.CloudSpec{}, -1, -1, coretools.Filter{}) |
272 | - if err != nil { |
273 | - return err |
274 | - } |
275 | - existingMetadata, err := Fetch([]simplestreams.DataSource{dataSource}, simplestreams.DefaultIndexPath, toolsConstraint, false) |
276 | - if err != nil && !errors.IsNotFoundError(err) { |
277 | - return err |
278 | - } |
279 | - newToolsVersions := make(map[string]bool) |
280 | - for _, tool := range toolsList { |
281 | - newToolsVersions[tool.Version.String()] = true |
282 | - } |
283 | - // Merge in existing records. |
284 | - for _, toolsMetadata := range existingMetadata { |
285 | - vers := version.Binary{version.MustParse(toolsMetadata.Version), toolsMetadata.Release, toolsMetadata.Arch} |
286 | - if _, ok := newToolsVersions[vers.String()]; ok { |
287 | - continue |
288 | - } |
289 | - tool := &coretools.Tools{ |
290 | - Version: vers, |
291 | - SHA256: toolsMetadata.SHA256, |
292 | - Size: toolsMetadata.Size, |
293 | - } |
294 | - toolsList = append(toolsList, tool) |
295 | - } |
296 | - metadataInfo, err := generateMetadata(toolsList, fetch) |
297 | - if err != nil { |
298 | - return err |
299 | - } |
300 | - for _, md := range metadataInfo { |
301 | - logger.Infof("Writing %s", "tools/"+md.Path) |
302 | - err = metadataStore.Put("tools/"+md.Path, bytes.NewReader(md.Data), int64(len(md.Data))) |
303 | - if err != nil { |
304 | - return err |
305 | - } |
306 | - } |
307 | - return nil |
308 | -} |
309 | - |
310 | -func generateMetadata(toolsList coretools.List, fetch bool) ([]MetadataFile, error) { |
311 | +// MetadataFromTools returns a tools metadata list derived from the |
312 | +// given tools list. The size and sha256 will not be computed if |
313 | +// missing. |
314 | +func MetadataFromTools(toolsList coretools.List) []*ToolsMetadata { |
315 | metadata := make([]*ToolsMetadata, len(toolsList)) |
316 | for i, t := range toolsList { |
317 | - var size int64 |
318 | - var sha256hex string |
319 | - var err error |
320 | - if fetch && t.Size == 0 { |
321 | - logger.Infof("Fetching tools to generate hash: %v", t.URL) |
322 | - var sha256hash hash.Hash |
323 | - size, sha256hash, err = fetchToolsHash(t.URL) |
324 | - if err != nil { |
325 | - return nil, err |
326 | - } |
327 | - sha256hex = fmt.Sprintf("%x", sha256hash.Sum(nil)) |
328 | - } else { |
329 | - size = t.Size |
330 | - sha256hex = t.SHA256 |
331 | - } |
332 | - |
333 | path := fmt.Sprintf("releases/juju-%s-%s-%s.tgz", t.Version.Number, t.Version.Series, t.Version.Arch) |
334 | metadata[i] = &ToolsMetadata{ |
335 | Release: t.Version.Series, |
336 | Version: t.Version.Number.String(), |
337 | Arch: t.Version.Arch, |
338 | + FullPath: t.URL, |
339 | Path: path, |
340 | FileType: "tar.gz", |
341 | - Size: size, |
342 | - SHA256: sha256hex, |
343 | - } |
344 | - } |
345 | - |
346 | + Size: t.Size, |
347 | + SHA256: t.SHA256, |
348 | + } |
349 | + } |
350 | + return metadata |
351 | +} |
352 | + |
353 | +// ResolveMetadata resolves incomplete metadata |
354 | +// by fetching the tools from storage and computing |
355 | +// the size and hash locally. |
356 | +func ResolveMetadata(stor storage.StorageReader, metadata []*ToolsMetadata) error { |
357 | + for _, md := range metadata { |
358 | + if md.Size != 0 { |
359 | + continue |
360 | + } |
361 | + binary := md.binary() |
362 | + logger.Infof("Fetching tools to generate hash: %v", binary) |
363 | + var sha256hash hash.Hash |
364 | + size, sha256hash, err := fetchToolsHash(stor, binary) |
365 | + if err != nil { |
366 | + return err |
367 | + } |
368 | + md.Size = size |
369 | + md.SHA256 = fmt.Sprintf("%x", sha256hash.Sum(nil)) |
370 | + } |
371 | + return nil |
372 | +} |
373 | + |
374 | +// MergeMetadata merges the given tools metadata. |
375 | +// If metadata for the same tools version exists in both lists, |
376 | +// an entry with non-empty size/SHA256 takes precedence; if |
377 | +// the two entries have different sizes/hashes, then an error is |
378 | +// returned. |
379 | +func MergeMetadata(tmlist1, tmlist2 []*ToolsMetadata) ([]*ToolsMetadata, error) { |
380 | + merged := make(map[version.Binary]*ToolsMetadata) |
381 | + for _, tm := range tmlist1 { |
382 | + merged[tm.binary()] = tm |
383 | + } |
384 | + for _, tm := range tmlist2 { |
385 | + binary := tm.binary() |
386 | + if existing, ok := merged[binary]; ok { |
387 | + if tm.Size != 0 { |
388 | + if existing.Size == 0 { |
389 | + merged[binary] = tm |
390 | + } else if existing.Size != tm.Size || existing.SHA256 != tm.SHA256 { |
391 | + return nil, fmt.Errorf( |
392 | + "metadata mismatch for %s: sizes=(%v,%v) sha256=(%v,%v)", |
393 | + binary.String(), |
394 | + existing.Size, tm.Size, |
395 | + existing.SHA256, tm.SHA256, |
396 | + ) |
397 | + } |
398 | + } |
399 | + } else { |
400 | + merged[binary] = tm |
401 | + } |
402 | + } |
403 | + list := make([]*ToolsMetadata, 0, len(merged)) |
404 | + for _, metadata := range merged { |
405 | + list = append(list, metadata) |
406 | + } |
407 | + return list, nil |
408 | +} |
409 | + |
410 | +// ReadMetadata returns the tools metadata from the given storage. |
411 | +func ReadMetadata(store storage.StorageReader) ([]*ToolsMetadata, error) { |
412 | + dataSource := storage.NewStorageSimpleStreamsDataSource(store, "tools") |
413 | + toolsConstraint, err := makeToolsConstraint(simplestreams.CloudSpec{}, -1, -1, coretools.Filter{}) |
414 | + if err != nil { |
415 | + return nil, err |
416 | + } |
417 | + metadata, err := Fetch([]simplestreams.DataSource{dataSource}, simplestreams.DefaultIndexPath, toolsConstraint, false) |
418 | + if err != nil && !errors.IsNotFoundError(err) { |
419 | + return nil, err |
420 | + } |
421 | + return metadata, nil |
422 | +} |
423 | + |
424 | +// WriteMetadata writes the given tools metadata to the given storage. |
425 | +func WriteMetadata(stor storage.Storage, metadata []*ToolsMetadata) error { |
426 | index, products, err := MarshalToolsMetadataJSON(metadata, time.Now()) |
427 | if err != nil { |
428 | - return nil, err |
429 | + return err |
430 | } |
431 | - objects := []MetadataFile{ |
432 | + metadataInfo := []MetadataFile{ |
433 | {simplestreams.UnsignedIndex, index}, |
434 | {ProductMetadataPath, products}, |
435 | } |
436 | - return objects, nil |
437 | -} |
438 | - |
439 | -// fetchToolsHash fetches the file at the specified URL, |
440 | -// and calculates its size in bytes and computes a SHA256 |
441 | -// hash of its contents. |
442 | -func fetchToolsHash(url string) (size int64, sha256hash hash.Hash, err error) { |
443 | - resp, err := http.Get(url) |
444 | + for _, md := range metadataInfo { |
445 | + logger.Infof("Writing %s", "tools/"+md.Path) |
446 | + err = stor.Put("tools/"+md.Path, bytes.NewReader(md.Data), int64(len(md.Data))) |
447 | + if err != nil { |
448 | + return err |
449 | + } |
450 | + } |
451 | + return nil |
452 | +} |
453 | + |
454 | +// MergeAndWriteMetadata reads the existing metadata from storage (if any), |
455 | +// and merges it with metadata generated from the given tools list. The |
456 | +// resulting metadata is written to storage. |
457 | +func MergeAndWriteMetadata(stor storage.Storage, tools coretools.List) error { |
458 | + existing, err := ReadMetadata(stor) |
459 | + if err != nil { |
460 | + return err |
461 | + } |
462 | + metadata := MetadataFromTools(tools) |
463 | + if metadata, err = MergeMetadata(metadata, existing); err != nil { |
464 | + return err |
465 | + } |
466 | + return WriteMetadata(stor, metadata) |
467 | +} |
468 | + |
469 | +// fetchToolsHash fetches the tools from storage and calculates |
470 | +// its size in bytes and computes a SHA256 hash of its contents. |
471 | +func fetchToolsHash(stor storage.StorageReader, ver version.Binary) (size int64, sha256hash hash.Hash, err error) { |
472 | + r, err := storage.Get(stor, StorageName(ver)) |
473 | if err != nil { |
474 | return 0, nil, err |
475 | } |
476 | + defer r.Close() |
477 | sha256hash = sha256.New() |
478 | - size, err = io.Copy(sha256hash, resp.Body) |
479 | - resp.Body.Close() |
480 | + size, err = io.Copy(sha256hash, r) |
481 | return size, sha256hash, err |
482 | } |
483 | |
484 | === modified file 'environs/tools/simplestreams_test.go' |
485 | --- environs/tools/simplestreams_test.go 2013-10-10 11:40:54 +0000 |
486 | +++ environs/tools/simplestreams_test.go 2013-10-16 05:27:02 +0000 |
487 | @@ -7,8 +7,8 @@ |
488 | "bytes" |
489 | "flag" |
490 | "fmt" |
491 | + "io" |
492 | "net/http" |
493 | - "path/filepath" |
494 | "reflect" |
495 | "strings" |
496 | "testing" |
497 | @@ -20,8 +20,10 @@ |
498 | "launchpad.net/juju-core/environs/jujutest" |
499 | "launchpad.net/juju-core/environs/simplestreams" |
500 | sstesting "launchpad.net/juju-core/environs/simplestreams/testing" |
501 | + "launchpad.net/juju-core/environs/storage" |
502 | "launchpad.net/juju-core/environs/tools" |
503 | ttesting "launchpad.net/juju-core/environs/tools/testing" |
504 | + "launchpad.net/juju-core/testing/testbase" |
505 | coretools "launchpad.net/juju-core/tools" |
506 | "launchpad.net/juju-core/version" |
507 | ) |
508 | @@ -280,14 +282,9 @@ |
509 | c.Assert(toolsMetadata[0], gc.DeepEquals, expectedMetadata) |
510 | } |
511 | |
512 | -func assertMetadataMatches(c *gc.C, toolList coretools.List, metadata []*tools.ToolsMetadata) { |
513 | +func assertMetadataMatches(c *gc.C, storageDir string, toolList coretools.List, metadata []*tools.ToolsMetadata) { |
514 | var expectedMetadata []*tools.ToolsMetadata = make([]*tools.ToolsMetadata, len(toolList)) |
515 | for i, tool := range toolList { |
516 | - if tool.URL != "" { |
517 | - size, sha256 := ttesting.SHA256sum(c, tool.URL) |
518 | - tool.SHA256 = sha256 |
519 | - tool.Size = size |
520 | - } |
521 | expectedMetadata[i] = &tools.ToolsMetadata{ |
522 | Release: tool.Version.Series, |
523 | Version: tool.Version.Number.String(), |
524 | @@ -316,10 +313,10 @@ |
525 | dir := c.MkDir() |
526 | writer, err := filestorage.NewFileStorageWriter(dir, filestorage.UseDefaultTmpDir) |
527 | c.Assert(err, gc.IsNil) |
528 | - err = tools.WriteMetadata(toolsList, false, writer) |
529 | + err = tools.MergeAndWriteMetadata(writer, toolsList) |
530 | c.Assert(err, gc.IsNil) |
531 | metadata := ttesting.ParseMetadata(c, dir) |
532 | - assertMetadataMatches(c, toolsList, metadata) |
533 | + assertMetadataMatches(c, dir, toolsList, metadata) |
534 | } |
535 | |
536 | func (s *simplestreamsSuite) TestWriteMetadata(c *gc.C) { |
537 | @@ -338,15 +335,16 @@ |
538 | SHA256: "abcd", |
539 | }, { |
540 | Version: version.MustParseBinary("2.0.1-raring-amd64"), |
541 | - URL: "file://" + filepath.Join(dir, "tools/releases/juju-2.0.1-raring-amd64.tgz"), |
542 | + // The URL is not used for generating metadata. |
543 | + URL: "bogus://", |
544 | }, |
545 | } |
546 | writer, err := filestorage.NewFileStorageWriter(dir, filestorage.UseDefaultTmpDir) |
547 | c.Assert(err, gc.IsNil) |
548 | - err = tools.WriteMetadata(toolsList, true, writer) |
549 | + err = tools.MergeAndWriteMetadata(writer, toolsList) |
550 | c.Assert(err, gc.IsNil) |
551 | metadata := ttesting.ParseMetadata(c, dir) |
552 | - assertMetadataMatches(c, toolsList, metadata) |
553 | + assertMetadataMatches(c, dir, toolsList, metadata) |
554 | } |
555 | |
556 | func (s *simplestreamsSuite) TestWriteMetadataMergeWithExisting(c *gc.C) { |
557 | @@ -364,7 +362,7 @@ |
558 | } |
559 | writer, err := filestorage.NewFileStorageWriter(dir, filestorage.UseDefaultTmpDir) |
560 | c.Assert(err, gc.IsNil) |
561 | - err = tools.WriteMetadata(existingToolsList, true, writer) |
562 | + err = tools.MergeAndWriteMetadata(writer, existingToolsList) |
563 | c.Assert(err, gc.IsNil) |
564 | newToolsList := coretools.List{ |
565 | existingToolsList[0], |
566 | @@ -374,11 +372,11 @@ |
567 | SHA256: "def", |
568 | }, |
569 | } |
570 | - err = tools.WriteMetadata(newToolsList, true, writer) |
571 | + err = tools.MergeAndWriteMetadata(writer, newToolsList) |
572 | c.Assert(err, gc.IsNil) |
573 | requiredToolsList := append(existingToolsList, newToolsList[1]) |
574 | metadata := ttesting.ParseMetadata(c, dir) |
575 | - assertMetadataMatches(c, requiredToolsList, metadata) |
576 | + assertMetadataMatches(c, dir, requiredToolsList, metadata) |
577 | } |
578 | |
579 | type productSpecSuite struct{} |
580 | @@ -480,6 +478,219 @@ |
581 | c.Assert(item.(*tools.ToolsMetadata).Size, gc.Equals, int64(9223372036854775807)) |
582 | } |
583 | |
584 | +type metadataHelperSuite struct { |
585 | + testbase.LoggingSuite |
586 | +} |
587 | + |
588 | +var _ = gc.Suite(&metadataHelperSuite{}) |
589 | + |
590 | +func (*metadataHelperSuite) TestMetadataFromTools(c *gc.C) { |
591 | + metadata := tools.MetadataFromTools(nil) |
592 | + c.Assert(metadata, gc.HasLen, 0) |
593 | + |
594 | + toolsList := coretools.List{{ |
595 | + Version: version.MustParseBinary("1.2.3-precise-amd64"), |
596 | + Size: 123, |
597 | + SHA256: "abc", |
598 | + }, { |
599 | + Version: version.MustParseBinary("2.0.1-raring-amd64"), |
600 | + URL: "file:///tmp/releases/juju-2.0.1-raring-amd64.tgz", |
601 | + Size: 456, |
602 | + SHA256: "xyz", |
603 | + }} |
604 | + metadata = tools.MetadataFromTools(toolsList) |
605 | + c.Assert(metadata, gc.HasLen, len(toolsList)) |
606 | + for i, t := range toolsList { |
607 | + md := metadata[i] |
608 | + c.Assert(md.Release, gc.Equals, t.Version.Series) |
609 | + c.Assert(md.Version, gc.Equals, t.Version.Number.String()) |
610 | + c.Assert(md.Arch, gc.Equals, t.Version.Arch) |
611 | + c.Assert(md.FullPath, gc.Equals, t.URL) |
612 | + c.Assert(md.Path, gc.Equals, tools.StorageName(t.Version)[len("tools/"):]) |
613 | + c.Assert(md.FileType, gc.Equals, "tar.gz") |
614 | + c.Assert(md.Size, gc.Equals, t.Size) |
615 | + c.Assert(md.SHA256, gc.Equals, t.SHA256) |
616 | + } |
617 | +} |
618 | + |
619 | +type countingStorage struct { |
620 | + storage.StorageReader |
621 | + counter int |
622 | +} |
623 | + |
624 | +func (c *countingStorage) Get(name string) (io.ReadCloser, error) { |
625 | + c.counter++ |
626 | + return c.StorageReader.Get(name) |
627 | +} |
628 | + |
629 | +func (*metadataHelperSuite) TestResolveMetadata(c *gc.C) { |
630 | + var versionStrings = []string{"1.2.3-precise-amd64"} |
631 | + dir := c.MkDir() |
632 | + ttesting.MakeTools(c, dir, "releases", versionStrings) |
633 | + toolsList := coretools.List{{ |
634 | + Version: version.MustParseBinary(versionStrings[0]), |
635 | + Size: 123, |
636 | + SHA256: "abc", |
637 | + }} |
638 | + |
639 | + stor, err := filestorage.NewFileStorageReader(dir) |
640 | + c.Assert(err, gc.IsNil) |
641 | + err = tools.ResolveMetadata(stor, nil) |
642 | + c.Assert(err, gc.IsNil) |
643 | + |
644 | + // We already have size/sha256, so ensure that storage isn't consulted. |
645 | + countingStorage := &countingStorage{StorageReader: stor} |
646 | + metadata := tools.MetadataFromTools(toolsList) |
647 | + err = tools.ResolveMetadata(countingStorage, metadata) |
648 | + c.Assert(err, gc.IsNil) |
649 | + c.Assert(countingStorage.counter, gc.Equals, 0) |
650 | + |
651 | + // Now clear size/sha256, and check that it is called, and |
652 | + // the size/sha256 sum are updated. |
653 | + metadata[0].Size = 0 |
654 | + metadata[0].SHA256 = "" |
655 | + err = tools.ResolveMetadata(countingStorage, metadata) |
656 | + c.Assert(err, gc.IsNil) |
657 | + c.Assert(countingStorage.counter, gc.Equals, 1) |
658 | + c.Assert(metadata[0].Size, gc.Not(gc.Equals), 0) |
659 | + c.Assert(metadata[0].SHA256, gc.Not(gc.Equals), "") |
660 | +} |
661 | + |
662 | +func (*metadataHelperSuite) TestMergeMetadata(c *gc.C) { |
663 | + md1 := &tools.ToolsMetadata{ |
664 | + Release: "precise", |
665 | + Version: "1.2.3", |
666 | + Arch: "amd64", |
667 | + Path: "path1", |
668 | + } |
669 | + md2 := &tools.ToolsMetadata{ |
670 | + Release: "precise", |
671 | + Version: "1.2.3", |
672 | + Arch: "amd64", |
673 | + Path: "path2", |
674 | + } |
675 | + md3 := &tools.ToolsMetadata{ |
676 | + Release: "raring", |
677 | + Version: "1.2.3", |
678 | + Arch: "amd64", |
679 | + Path: "path3", |
680 | + } |
681 | + |
682 | + withSize := func(md *tools.ToolsMetadata, size int64) *tools.ToolsMetadata { |
683 | + clone := *md |
684 | + clone.Size = size |
685 | + return &clone |
686 | + } |
687 | + withSHA256 := func(md *tools.ToolsMetadata, sha256 string) *tools.ToolsMetadata { |
688 | + clone := *md |
689 | + clone.SHA256 = sha256 |
690 | + return &clone |
691 | + } |
692 | + |
693 | + type mdlist []*tools.ToolsMetadata |
694 | + type test struct { |
695 | + name string |
696 | + lhs, rhs, merged []*tools.ToolsMetadata |
697 | + err string |
698 | + } |
699 | + tests := []test{{ |
700 | + name: "non-empty lhs, empty rhs", |
701 | + lhs: mdlist{md1}, |
702 | + rhs: nil, |
703 | + merged: mdlist{md1}, |
704 | + }, { |
705 | + name: "empty lhs, non-empty rhs", |
706 | + lhs: nil, |
707 | + rhs: mdlist{md2}, |
708 | + merged: mdlist{md2}, |
709 | + }, { |
710 | + name: "identical lhs, rhs", |
711 | + lhs: mdlist{md1}, |
712 | + rhs: mdlist{md1}, |
713 | + merged: mdlist{md1}, |
714 | + }, { |
715 | + name: "same tools in lhs and rhs, neither have size: prefer lhs", |
716 | + lhs: mdlist{md1}, |
717 | + rhs: mdlist{md2}, |
718 | + merged: mdlist{md1}, |
719 | + }, { |
720 | + name: "same tools in lhs and rhs, only lhs has a size: prefer lhs", |
721 | + lhs: mdlist{withSize(md1, 123)}, |
722 | + rhs: mdlist{md2}, |
723 | + merged: mdlist{withSize(md1, 123)}, |
724 | + }, { |
725 | + name: "same tools in lhs and rhs, only rhs has a size: prefer rhs", |
726 | + lhs: mdlist{md1}, |
727 | + rhs: mdlist{withSize(md2, 123)}, |
728 | + merged: mdlist{withSize(md2, 123)}, |
729 | + }, { |
730 | + name: "same tools in lhs and rhs, both have the same size: prefer lhs", |
731 | + lhs: mdlist{withSize(md1, 123)}, |
732 | + rhs: mdlist{withSize(md2, 123)}, |
733 | + merged: mdlist{withSize(md1, 123)}, |
734 | + }, { |
735 | + name: "same tools in lhs and rhs, both have different sizes: error", |
736 | + lhs: mdlist{withSize(md1, 123)}, |
737 | + rhs: mdlist{withSize(md2, 456)}, |
738 | + err: "metadata mismatch for 1\\.2\\.3-precise-amd64: sizes=\\(123,456\\) sha256=\\(,\\)", |
739 | + }, { |
740 | + name: "same tools in lhs and rhs, both have same size but different sha256: error", |
741 | + lhs: mdlist{withSHA256(withSize(md1, 123), "a")}, |
742 | + rhs: mdlist{withSHA256(withSize(md2, 123), "b")}, |
743 | + err: "metadata mismatch for 1\\.2\\.3-precise-amd64: sizes=\\(123,123\\) sha256=\\(a,b\\)", |
744 | + }, { |
745 | + name: "lhs is a proper superset of rhs: union of lhs and rhs", |
746 | + lhs: mdlist{md1, md3}, |
747 | + rhs: mdlist{md1}, |
748 | + merged: mdlist{md1, md3}, |
749 | + }, { |
750 | + name: "rhs is a proper superset of lhs: union of lhs and rhs", |
751 | + lhs: mdlist{md1}, |
752 | + rhs: mdlist{md1, md3}, |
753 | + merged: mdlist{md1, md3}, |
754 | + }} |
755 | + for i, test := range tests { |
756 | + c.Logf("test %d: %s", i, test.name) |
757 | + merged, err := tools.MergeMetadata(test.lhs, test.rhs) |
758 | + if test.err == "" { |
759 | + c.Assert(err, gc.IsNil) |
760 | + c.Assert(merged, gc.DeepEquals, test.merged) |
761 | + } else { |
762 | + c.Assert(err, gc.ErrorMatches, test.err) |
763 | + c.Assert(merged, gc.IsNil) |
764 | + } |
765 | + } |
766 | +} |
767 | + |
768 | +func (*metadataHelperSuite) TestReadWriteMetadata(c *gc.C) { |
769 | + metadata := []*tools.ToolsMetadata{{ |
770 | + Release: "precise", |
771 | + Version: "1.2.3", |
772 | + Arch: "amd64", |
773 | + Path: "path1", |
774 | + }, { |
775 | + Release: "raring", |
776 | + Version: "1.2.3", |
777 | + Arch: "amd64", |
778 | + Path: "path2", |
779 | + }} |
780 | + |
781 | + stor, err := filestorage.NewFileStorageWriter(c.MkDir(), filestorage.UseDefaultTmpDir) |
782 | + c.Assert(err, gc.IsNil) |
783 | + out, err := tools.ReadMetadata(stor) |
784 | + c.Assert(out, gc.HasLen, 0) |
785 | + c.Assert(err, gc.IsNil) // non-existence is not an error |
786 | + err = tools.WriteMetadata(stor, metadata) |
787 | + c.Assert(err, gc.IsNil) |
788 | + out, err = tools.ReadMetadata(stor) |
789 | + for _, md := range out { |
790 | + // FullPath is set by ReadMetadata. |
791 | + c.Assert(md.FullPath, gc.Not(gc.Equals), "") |
792 | + md.FullPath = "" |
793 | + } |
794 | + c.Assert(out, gc.DeepEquals, metadata) |
795 | +} |
796 | + |
797 | type signedSuite struct { |
798 | origKey string |
799 | } |
800 | |
801 | === modified file 'environs/tools/tools.go' |
802 | --- environs/tools/tools.go 2013-09-26 23:07:24 +0000 |
803 | +++ environs/tools/tools.go 2013-10-16 05:27:02 +0000 |
804 | @@ -121,13 +121,8 @@ |
805 | } |
806 | list = make(coretools.List, len(toolsMetadata)) |
807 | for i, metadata := range toolsMetadata { |
808 | - binary := version.Binary{ |
809 | - Number: version.MustParse(metadata.Version), |
810 | - Arch: metadata.Arch, |
811 | - Series: metadata.Release, |
812 | - } |
813 | list[i] = &coretools.Tools{ |
814 | - Version: binary, |
815 | + Version: metadata.binary(), |
816 | URL: metadata.FullPath, |
817 | Size: metadata.Size, |
818 | SHA256: metadata.SHA256, |
819 | |
820 | === modified file 'testing/targz.go' |
821 | --- testing/targz.go 2013-09-25 00:18:35 +0000 |
822 | +++ testing/targz.go 2013-10-16 05:27:02 +0000 |
823 | @@ -11,7 +11,6 @@ |
824 | "fmt" |
825 | "io" |
826 | "os" |
827 | - "time" |
828 | ) |
829 | |
830 | // TarFile represents a file to be archived. |
831 | @@ -33,17 +32,16 @@ |
832 | if ftype == 0 { |
833 | panic(fmt.Errorf("unexpected mode %v", mode)) |
834 | } |
835 | + // NOTE: Do not set attributes (e.g. times) dynamically, as various |
836 | + // tests expect the contents of fake tools archives to be unchanging. |
837 | return &TarFile{ |
838 | Header: tar.Header{ |
839 | - Typeflag: ftype, |
840 | - Name: name, |
841 | - Size: int64(len(contents)), |
842 | - Mode: int64(mode & 0777), |
843 | - ModTime: time.Now(), |
844 | - AccessTime: time.Now(), |
845 | - ChangeTime: time.Now(), |
846 | - Uname: "ubuntu", |
847 | - Gname: "ubuntu", |
848 | + Typeflag: ftype, |
849 | + Name: name, |
850 | + Size: int64(len(contents)), |
851 | + Mode: int64(mode & 0777), |
852 | + Uname: "ubuntu", |
853 | + Gname: "ubuntu", |
854 | }, |
855 | Contents: contents, |
856 | } |
857 | |
858 | === modified file 'worker/upgrader/upgrader_test.go' |
859 | --- worker/upgrader/upgrader_test.go 2013-10-02 16:28:16 +0000 |
860 | +++ worker/upgrader/upgrader_test.go 2013-10-16 05:27:02 +0000 |
861 | @@ -83,7 +83,7 @@ |
862 | c.Assert(err, gc.IsNil) |
863 | stor := s.Conn.Environ.Storage() |
864 | agentTools := envtesting.PrimeTools(c, stor, s.DataDir(), vers) |
865 | - err = envtools.WriteMetadata(coretools.List{agentTools}, true, stor) |
866 | + err = envtools.MergeAndWriteMetadata(stor, coretools.List{agentTools}) |
867 | _, err = s.machine.AgentTools() |
868 | c.Assert(err, jc.Satisfies, errors.IsNotFoundError) |
869 | |
870 | @@ -119,7 +119,7 @@ |
871 | oldTools := envtesting.PrimeTools(c, stor, s.DataDir(), version.MustParseBinary("5.4.3-precise-amd64")) |
872 | newTools := envtesting.AssertUploadFakeToolsVersions( |
873 | c, stor, version.MustParseBinary("5.4.5-precise-amd64"))[0] |
874 | - err := envtools.WriteMetadata(coretools.List{oldTools, newTools}, true, stor) |
875 | + err := envtools.MergeAndWriteMetadata(stor, coretools.List{oldTools, newTools}) |
876 | c.Assert(err, gc.IsNil) |
877 | err = statetesting.SetAgentVersion(s.State, newTools.Version.Number) |
878 | c.Assert(err, gc.IsNil) |
879 | @@ -147,7 +147,7 @@ |
880 | oldTools := envtesting.PrimeTools(c, stor, s.DataDir(), version.MustParseBinary("5.4.3-precise-amd64")) |
881 | newTools := envtesting.AssertUploadFakeToolsVersions( |
882 | c, stor, version.MustParseBinary("5.4.5-precise-amd64"))[0] |
883 | - err := envtools.WriteMetadata(coretools.List{oldTools, newTools}, true, stor) |
884 | + err := envtools.MergeAndWriteMetadata(stor, coretools.List{oldTools, newTools}) |
885 | c.Assert(err, gc.IsNil) |
886 | err = statetesting.SetAgentVersion(s.State, newTools.Version.Number) |
887 | c.Assert(err, gc.IsNil) |
888 | @@ -202,7 +202,7 @@ |
889 | } |
890 | stor := s.Conn.Environ.Storage() |
891 | newTools := envtesting.PrimeTools(c, stor, s.DataDir(), version.MustParseBinary("5.4.3-precise-amd64")) |
892 | - err := envtools.WriteMetadata(coretools.List{newTools}, true, stor) |
893 | + err := envtools.MergeAndWriteMetadata(stor, coretools.List{newTools}) |
894 | c.Assert(err, gc.IsNil) |
895 | ugErr := &upgrader.UpgradeReadyError{ |
896 | AgentName: "anAgent", |
Reviewers: mp+189767_code.launchpad.net,
Message:
Please take a look.
Description:
Fix tools metadata generation for null provider
There are two functional changes here:
- If there's existing tools metadata, use it
if it has sha256/size in it, and the tools
haven't been replaced.
- Don't use net/http or the tools URL when
fetching tools to compute sha256/size. That
doesn't work with sshstorage.
Additionally, I have refactored
environs/tools/simplestreams.go, to make the
intent of WriteMetadata (now MergeAndWriteMetadata)
more obvious. That function is now composed of
several other more specific functions.
Fixes #1235717
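For illustration, the first change above (existing metadata with a sha256/size takes precedence, and conflicting sizes or hashes are an error) can be sketched roughly as follows. The `toolsMeta` struct and `mergeMeta` function are simplified stand-ins for `ToolsMetadata` and `MergeMetadata`, keyed by version string instead of `version.Binary`:

```go
package main

import "fmt"

// toolsMeta is a trimmed stand-in for environs/tools.ToolsMetadata,
// keeping only the fields the merge rule inspects.
type toolsMeta struct {
	Version string // e.g. "1.2.3-precise-amd64"
	Size    int64
	SHA256  string
}

// mergeMeta merges two metadata lists: for the same version, an entry
// with a non-zero size wins over one without; two entries that both
// carry a size must agree on size and SHA256, otherwise it is an error.
func mergeMeta(lhs, rhs []toolsMeta) ([]toolsMeta, error) {
	merged := make(map[string]toolsMeta)
	var order []string // preserve insertion order for deterministic output
	for _, m := range lhs {
		merged[m.Version] = m
		order = append(order, m.Version)
	}
	for _, m := range rhs {
		existing, ok := merged[m.Version]
		if !ok {
			merged[m.Version] = m
			order = append(order, m.Version)
			continue
		}
		if m.Size == 0 {
			continue // keep the existing entry
		}
		if existing.Size == 0 {
			merged[m.Version] = m // rhs has the hash; prefer it
		} else if existing.Size != m.Size || existing.SHA256 != m.SHA256 {
			return nil, fmt.Errorf("metadata mismatch for %s", m.Version)
		}
	}
	out := make([]toolsMeta, 0, len(order))
	for _, v := range order {
		out = append(out, merged[v])
	}
	return out, nil
}

func main() {
	generated := []toolsMeta{{Version: "1.2.3-precise-amd64"}}                           // freshly generated, no hash
	existing := []toolsMeta{{Version: "1.2.3-precise-amd64", Size: 123, SHA256: "abc"}} // already in storage
	merged, err := mergeMeta(generated, existing)
	if err != nil {
		panic(err)
	}
	fmt.Println(merged[0].Size, merged[0].SHA256)
}
```

This is why unchanged tools need not be re-fetched during a sync: the existing metadata already carries a trusted sha256/size and simply wins the merge.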
https://code.launchpad.net/~axwalk/juju-core/lp1235717-refactor-simplestreams-writemetadata/+merge/189767
(do not edit description out of merge proposal)
Please review this at https://codereview.appspot.com/14527043/
Affected files (+171, -122 lines):
A [revision details]
M cmd/jujud/agent_test.go
M cmd/plugins/juju-metadata/toolsmetadata.go
M cmd/plugins/juju-metadata/toolsmetadata_test.go
M environs/sync/sync.go
M environs/testing/tools.go
M environs/tools/simplestreams.go
M environs/tools/simplestreams_test.go
M environs/tools/tools.go
M worker/upgrader/upgrader_test.go