Switch over to gps
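
The gps README (vendored below) summarizes the model this migration buys us: a
solver consumes manifest-type data (declared constraints) plus, optionally,
lock-type data, and emits lock-type data (pinned revisions). A stdlib-only
sketch of that data flow; the types and the solve function here are
illustrative stand-ins, not the actual gps API:

```go
package main

import "fmt"

// Manifest declares desired dependencies and version constraints.
// Illustrative only; gps defines its own Manifest and Lock types.
type Manifest map[string]string // import path -> version constraint

// Lock records the exact revisions a solve produced.
type Lock map[string]string // import path -> pinned revision

// solve stands in for the real solver: it turns constraints into
// pinned revisions (here, trivially, by echoing the constraint).
func solve(m Manifest) Lock {
	l := make(Lock)
	for path, constraint := range m {
		// A real solver would pick a concrete revision satisfying
		// the constraint; this placeholder just tags it.
		l[path] = "rev-for(" + constraint + ")"
	}
	return l
}

func main() {
	m := Manifest{"github.com/Masterminds/semver": "2.x"}
	fmt.Println(solve(m)["github.com/Masterminds/semver"])
}
```

In glide's terms, glide.yaml plays the manifest role and glide.lock the lock
role; gps supplies the solver in between.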
diff --git a/glide.lock b/glide.lock
index da12611..951824d 100644
--- a/glide.lock
+++ b/glide.lock
@@ -9,8 +9,8 @@
version: 0a2c9fc0eee2c4cbb9526877c4a54da047fdcadd
- name: github.com/Masterminds/vcs
version: 7af28b64c5ec41b1558f5514fd938379822c237c
-- name: github.com/sdboyer/vsolver
- version: 4a1c3dd00ed484b3e87b4668b357e531b36baaa8
+- name: github.com/sdboyer/gps
+ version: a868c10855893c21ed05d0f50d6f9acb12b6366d
- name: github.com/termie/go-shutil
version: bcacb06fecaeec8dc42af03c87c6949f4a05c74c
- name: gopkg.in/yaml.v2
diff --git a/glide.yaml b/glide.yaml
index 5efef43..7a966ad 100644
--- a/glide.yaml
+++ b/glide.yaml
@@ -17,5 +17,5 @@
version: ~1.14.0
- package: github.com/Masterminds/semver
branch: 2.x
-- package: github.com/sdboyer/vsolver
+- package: github.com/sdboyer/gps
branch: master
diff --git a/vendor/github.com/sdboyer/vsolver/.gitignore b/vendor/github.com/sdboyer/gps/.gitignore
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/.gitignore
rename to vendor/github.com/sdboyer/gps/.gitignore
diff --git a/vendor/github.com/sdboyer/gps/CODE_OF_CONDUCT.md b/vendor/github.com/sdboyer/gps/CODE_OF_CONDUCT.md
new file mode 100644
index 0000000..660ee84
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/CODE_OF_CONDUCT.md
@@ -0,0 +1,74 @@
+# Contributor Covenant Code of Conduct
+
+## Our Pledge
+
+In the interest of fostering an open and welcoming environment, we as
+contributors and maintainers pledge to making participation in our project and
+our community a harassment-free experience for everyone, regardless of age, body
+size, disability, ethnicity, gender identity and expression, level of
+experience, nationality, personal appearance, race, religion, or sexual identity
+and orientation.
+
+## Our Standards
+
+Examples of behavior that contributes to creating a positive environment
+include:
+
+* Using welcoming and inclusive language
+* Being respectful of differing viewpoints and experiences
+* Gracefully accepting constructive criticism
+* Focusing on what is best for the community
+* Showing empathy towards other community members
+
+Examples of unacceptable behavior by participants include:
+
+* The use of sexualized language or imagery and unwelcome sexual attention or
+ advances
+* Trolling, insulting/derogatory comments, and personal or political attacks
+* Public or private harassment
+* Publishing others' private information, such as a physical or electronic
+ address, without explicit permission
+* Other conduct which could reasonably be considered inappropriate in a
+ professional setting
+
+## Our Responsibilities
+
+Project maintainers are responsible for clarifying the standards of acceptable
+behavior and are expected to take appropriate and fair corrective action in
+response to any instances of unacceptable behavior.
+
+Project maintainers have the right and responsibility to remove, edit, or reject
+comments, commits, code, wiki edits, issues, and other contributions that are
+not aligned to this Code of Conduct, or to ban temporarily or permanently any
+contributor for other behaviors that they deem inappropriate, threatening,
+offensive, or harmful.
+
+## Scope
+
+This Code of Conduct applies both within project spaces and in public spaces
+when an individual is representing the project or its community. Examples of
+representing a project or community include using an official project e-mail
+address, posting via an official social media account, or acting as an appointed
+representative at an online or offline event. Representation of a project may be
+further defined and clarified by project maintainers.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be
+reported by contacting the project team at sam (at) samboyer.org. All complaints
+will be reviewed and investigated and will result in a response that is deemed
+necessary and appropriate to the circumstances. The project team is obligated to
+maintain confidentiality with regard to the reporter of an incident. Further
+details of specific enforcement policies may be posted separately.
+
+Project maintainers who do not follow or enforce the Code of Conduct in good
+faith may face temporary or permanent repercussions as determined by other
+members of the project's leadership.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage],
+version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
+
+[homepage]: http://contributor-covenant.org
+[version]: http://contributor-covenant.org/version/1/4/
diff --git a/vendor/github.com/sdboyer/gps/CONTRIBUTING.md b/vendor/github.com/sdboyer/gps/CONTRIBUTING.md
new file mode 100644
index 0000000..3ff03b3
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/CONTRIBUTING.md
@@ -0,0 +1,58 @@
+# Contributing to `gps`
+
+:+1::tada: First, we're thrilled you're thinking about contributing! :tada::+1:
+
+As a library trying to cover all the bases in Go package management, it's
+crucial that we incorporate a broad range of experiences and use cases. There is
+a strong, motivating design behind `gps`, but we are always open to discussion
+on ways we can improve the library, particularly if it allows `gps` to cover
+more of the Go package management possibility space.
+
+`gps` has no CLA, but we do have a [Code of Conduct](https://github.com/sdboyer/gps/blob/master/CODE_OF_CONDUCT.md). By
+participating, you are expected to uphold this code.
+
+## How can I contribute?
+
+It may be best to start by getting a handle on what `gps` actually is. Our
+wiki has a [general introduction](https://github.com/sdboyer/gps/wiki/Introduction-to-gps), a
+[guide for tool implementors](https://github.com/sdboyer/gps/wiki/gps-for-Implementors), and
+a [guide for contributors](https://github.com/sdboyer/gps/wiki/gps-for-contributors).
+There's also a [discursive essay](https://medium.com/@sdboyer/so-you-want-to-write-a-package-manager-4ae9c17d9527)
+that lays out the big-picture goals and considerations driving the `gps` design.
+
+There are a number of ways to contribute, all highly valuable and deeply
+appreciated:
+
+* **Helping "translate" existing issues:** as `gps` exits its larval stage, it still
+ has a number of issues that may be incomprehensible to everyone except
+ @sdboyer. Simply asking clarifying questions on these issues is helpful!
+* **Identifying missed use cases:** the loose `gps` rule of thumb is, "if you can do
+ it in Go, we support it in `gps`." Posting issues about cases we've missed
+ helps us reach that goal.
+* **Writing tests:** in the same vein, `gps` has a [large suite](https://github.com/sdboyer/gps) of solving tests, but
+ they still only scratch the surface. Writing tests is not only helpful, but is
+ also a great way to get a feel for how `gps` works.
+* **Suggesting enhancements:** `gps` has plenty of missing chunks. Help fill them in!
+* **Reporting bugs**: `gps` being a library means this isn't always the easiest.
+ However, you could always compile the [example](https://github.com/sdboyer/gps/blob/master/example.go), run that against some of
+ your projects, and report problems you encounter.
+* **Building experimental tools with `gps`:** probably the best and fastest ways to
+ kick the tires!
+
+`gps` is still beta-ish software. There are plenty of bugs to squash! APIs are
+stabilizing, but are still subject to change.
+
+## Issues and Pull Requests
+
+Pull requests are the preferred way to submit changes to `gps`. Unless the
+changes are quite small, pull requests should generally reference an
+already-opened issue. Make sure to explain clearly in the body of the PR what
+the reasoning behind the change is.
+
+The changes themselves should generally conform to the following guidelines:
+
+* Git commit messages should be [well-written](http://chris.beams.io/posts/git-commit/#seven-rules).
+* Code should be `gofmt`-ed.
+* New or changed logic should be accompanied by tests.
+* Maintainable, table-based tests are strongly preferred, even if it means
+ writing a new testing harness to execute them.
diff --git a/vendor/github.com/sdboyer/vsolver/LICENSE b/vendor/github.com/sdboyer/gps/LICENSE
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/LICENSE
rename to vendor/github.com/sdboyer/gps/LICENSE
diff --git a/vendor/github.com/sdboyer/gps/README.md b/vendor/github.com/sdboyer/gps/README.md
new file mode 100644
index 0000000..227bf6b
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/README.md
@@ -0,0 +1,95 @@
+# gps
+
+---
+
+[Build Status](https://circleci.com/gh/sdboyer/gps) [Go Report Card](https://goreportcard.com/report/github.com/sdboyer/gps) [GoDoc](https://godoc.org/github.com/sdboyer/gps)
+
+`gps` is the Go Packaging Solver. It is an engine for tackling dependency
+management problems in Go. You can replicate the fetching bits of `go get`,
+modulo arguments, [in about 30 lines of
+code](https://github.com/sdboyer/gps/blob/master/example.go) with `gps`.
+
+`gps` is _not_ Yet Another Go Package Management Tool. Rather, it's a library
+that package management (and adjacent) tools can use to solve the
+[hard](https://en.wikipedia.org/wiki/Boolean_satisfiability_problem) parts of
+the problem in a consistent,
+[holistic](https://medium.com/@sdboyer/so-you-want-to-write-a-package-manager-4ae9c17d9527)
+way. `gps` is [on track](https://github.com/Masterminds/glide/pull/384) to become the engine behind [glide](https://glide.sh).
+
+The wiki has a [general introduction to the `gps`
+approach](https://github.com/sdboyer/gps/wiki/Introduction-to-gps), as well
+as guides for folks [implementing
+tools](https://github.com/sdboyer/gps/wiki/gps-for-Implementors) or [looking
+to contribute](https://github.com/sdboyer/gps/wiki/gps-for-contributors).
+
+**`gps` is progressing rapidly, but still beta, with a liberal sprinkling of panics.**
+
+## Wait...a package management _library_?!
+
+Yup. Because it's what the Go ecosystem needs right now.
+
+There are [scads of
+tools](https://github.com/golang/go/wiki/PackageManagementTools) out there, each
+tackling some slice of the Go package management domain. Some handle more than
+others, some impose more restrictions than others, and most are mutually
+incompatible (or mutually indifferent, which amounts to the same). This
+fragments the Go FLOSS ecosystem, harming the community as a whole.
+
+As in all epic software arguments, some of the points of disagreement between
+tools/their authors are a bit silly. Many, though, are based on legitimate
+differences of opinion about what workflows, controls, and interfaces are
+best to give Go developers.
+
+Now, we're certainly no less opinionated than anyone else. But part of the
+challenge has been that, with a problem as
+[complex](https://medium.com/@sdboyer/so-you-want-to-write-a-package-manager-4ae9c17d9527)
+as package management, subtle design decisions made in pursuit of a particular
+workflow or interface can have far-reaching effects on architecture, leading to
+deep incompatibilities between tools and approaches.
+
+We believe that many of [these
+differences](https://docs.google.com/document/d/1xrV9D5u8AKu1ip-A1W9JqhUmmeOhoI6d6zjVwvdn5mc/edit?usp=sharing)
+are incidental - and, given the right general solution, reconcilable. `gps` is
+our attempt at such a solution.
+
+By separating out the underlying problem into a standalone library, we are
+hoping to provide a common foundation for different tools. Such a foundation
+could improve interoperability, reduce harm to the ecosystem, and make the
+communal process of figuring out what's right for Go more about collaboration,
+and less about fiefdoms.
+
+### Assumptions
+
+Ideally, `gps` could provide this shared foundation with no additional
+assumptions beyond pure Go source files. Sadly, package management is too
+complex to be assumption-less. So, `gps` tries to keep its assumptions to the
+minimum, supporting as many situations as possible while still maintaining a
+predictable, well-formed system.
+
+* Go 1.6, or 1.5 with `GO15VENDOREXPERIMENT=1` set. `vendor/`
+ directories are a requirement.
+* You don't manually change what's under `vendor/`. That’s tooling’s
+ job.
+* A **project** concept, where projects comprise the set of Go packages in a
+ rooted directory tree. By happy (not) accident, `vendor/` directories also
+ just happen to cover a rooted tree.
+* A [**manifest**](https://godoc.org/github.com/sdboyer/gps#Manifest) and
+ [**lock**](https://godoc.org/github.com/sdboyer/gps#Lock) approach to
+ tracking version and constraint information. The solver takes manifest (and,
+ optionally, lock)-type data as inputs, and produces lock-type data as its
+ output. Tools decide how to actually store this data, but these should
+ generally be at the root of the project tree.
+
+Manifests? Locks? Eeew. Yes, we also think it'd be swell if we didn't need
+metadata files. We love the idea of Go packages as standalone, self-describing
+code. Unfortunately, the wheels come off that idea as soon as versioning and
+cross-project/repository dependencies happen. But universe alignment is hard;
+trying to intermix version information directly with the code would only make
+matters worse.
+
+## Contributing
+
+Yay, contributing! Please see
+[CONTRIBUTING.md](https://github.com/sdboyer/gps/blob/master/CONTRIBUTING.md).
+Note that `gps` also abides by a [Code of
+Conduct](https://github.com/sdboyer/gps/blob/master/CODE_OF_CONDUCT.md), and is MIT-licensed.
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/disallow/.m1p/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/disallow/.m1p/a.go
new file mode 100644
index 0000000..e4e2ced
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/disallow/.m1p/a.go
@@ -0,0 +1,12 @@
+package m1p
+
+import (
+ "sort"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = sort.Strings
+ S = gps.Solve
+)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/m1p/b.go b/vendor/github.com/sdboyer/gps/_testdata/src/disallow/.m1p/b.go
similarity index 100%
copy from vendor/github.com/sdboyer/vsolver/_testdata/src/m1p/b.go
copy to vendor/github.com/sdboyer/gps/_testdata/src/disallow/.m1p/b.go
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/disallow/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/disallow/a.go
new file mode 100644
index 0000000..59d2f72
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/disallow/a.go
@@ -0,0 +1,14 @@
+package disallow
+
+import (
+ "sort"
+ "disallow/testdata"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = sort.Strings
+ _ = gps.Solve
+ _ = testdata.H
+)
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/disallow/testdata/another.go b/vendor/github.com/sdboyer/gps/_testdata/src/disallow/testdata/another.go
new file mode 100644
index 0000000..6defdae
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/disallow/testdata/another.go
@@ -0,0 +1,7 @@
+package testdata
+
+import "hash"
+
+var (
+	H hash.Hash
+)
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/doublenest/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/doublenest/a.go
new file mode 100644
index 0000000..04cac6a
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/doublenest/a.go
@@ -0,0 +1,12 @@
+package base
+
+import (
+ "go/parser"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = parser.ParseFile
+ _ = gps.Solve
+)
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/doublenest/namemismatch/m1p/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/doublenest/namemismatch/m1p/a.go
new file mode 100644
index 0000000..ec1f9b9
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/doublenest/namemismatch/m1p/a.go
@@ -0,0 +1,12 @@
+package m1p
+
+import (
+ "sort"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = sort.Strings
+ _ = gps.Solve
+)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/m1p/b.go b/vendor/github.com/sdboyer/gps/_testdata/src/doublenest/namemismatch/m1p/b.go
similarity index 100%
copy from vendor/github.com/sdboyer/vsolver/_testdata/src/m1p/b.go
copy to vendor/github.com/sdboyer/gps/_testdata/src/doublenest/namemismatch/m1p/b.go
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/varied/namemismatch/nm.go b/vendor/github.com/sdboyer/gps/_testdata/src/doublenest/namemismatch/nm.go
similarity index 100%
copy from vendor/github.com/sdboyer/vsolver/_testdata/src/varied/namemismatch/nm.go
copy to vendor/github.com/sdboyer/gps/_testdata/src/doublenest/namemismatch/nm.go
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/empty/.gitkeep b/vendor/github.com/sdboyer/gps/_testdata/src/empty/.gitkeep
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/empty/.gitkeep
rename to vendor/github.com/sdboyer/gps/_testdata/src/empty/.gitkeep
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/igmain/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/igmain/a.go
new file mode 100644
index 0000000..300b730
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/igmain/a.go
@@ -0,0 +1,12 @@
+package simple
+
+import (
+ "sort"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = sort.Strings
+ _ = gps.Solve
+)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/igmain/igmain.go b/vendor/github.com/sdboyer/gps/_testdata/src/igmain/igmain.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/igmain/igmain.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/igmain/igmain.go
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/igmainlong/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/igmainlong/a.go
new file mode 100644
index 0000000..300b730
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/igmainlong/a.go
@@ -0,0 +1,12 @@
+package simple
+
+import (
+ "sort"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = sort.Strings
+ _ = gps.Solve
+)
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/igmainlong/igmain.go b/vendor/github.com/sdboyer/gps/_testdata/src/igmainlong/igmain.go
new file mode 100644
index 0000000..efee3f9
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/igmainlong/igmain.go
@@ -0,0 +1,9 @@
+// Another comment, which the parser should ignore and still see build tags
+
+// +build ignore
+
+package main
+
+import "unicode"
+
+var _ = unicode.In
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/igmaint/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/igmaint/a.go
new file mode 100644
index 0000000..300b730
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/igmaint/a.go
@@ -0,0 +1,12 @@
+package simple
+
+import (
+ "sort"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = sort.Strings
+ _ = gps.Solve
+)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/igmaint/igmain.go b/vendor/github.com/sdboyer/gps/_testdata/src/igmaint/igmain.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/igmaint/igmain.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/igmaint/igmain.go
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/igmaint/t_test.go b/vendor/github.com/sdboyer/gps/_testdata/src/igmaint/t_test.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/igmaint/t_test.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/igmaint/t_test.go
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/m1p/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/m1p/a.go
new file mode 100644
index 0000000..ec1f9b9
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/m1p/a.go
@@ -0,0 +1,12 @@
+package m1p
+
+import (
+ "sort"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = sort.Strings
+ _ = gps.Solve
+)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/m1p/b.go b/vendor/github.com/sdboyer/gps/_testdata/src/m1p/b.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/m1p/b.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/m1p/b.go
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/missing/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/missing/a.go
new file mode 100644
index 0000000..8522bdd
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/missing/a.go
@@ -0,0 +1,14 @@
+package simple
+
+import (
+ "sort"
+
+ "missing/missing"
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = sort.Strings
+ _ = gps.Solve
+ _ = missing.Foo
+)
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/missing/m1p/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/missing/m1p/a.go
new file mode 100644
index 0000000..ec1f9b9
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/missing/m1p/a.go
@@ -0,0 +1,12 @@
+package m1p
+
+import (
+ "sort"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = sort.Strings
+ _ = gps.Solve
+)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/m1p/b.go b/vendor/github.com/sdboyer/gps/_testdata/src/missing/m1p/b.go
similarity index 100%
copy from vendor/github.com/sdboyer/vsolver/_testdata/src/m1p/b.go
copy to vendor/github.com/sdboyer/gps/_testdata/src/missing/m1p/b.go
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/nest/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/nest/a.go
new file mode 100644
index 0000000..300b730
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/nest/a.go
@@ -0,0 +1,12 @@
+package simple
+
+import (
+ "sort"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = sort.Strings
+ _ = gps.Solve
+)
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/nest/m1p/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/nest/m1p/a.go
new file mode 100644
index 0000000..ec1f9b9
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/nest/m1p/a.go
@@ -0,0 +1,12 @@
+package m1p
+
+import (
+ "sort"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = sort.Strings
+ _ = gps.Solve
+)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/nest/m1p/b.go b/vendor/github.com/sdboyer/gps/_testdata/src/nest/m1p/b.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/nest/m1p/b.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/nest/m1p/b.go
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/ren/m1p/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/ren/m1p/a.go
new file mode 100644
index 0000000..ec1f9b9
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/ren/m1p/a.go
@@ -0,0 +1,12 @@
+package m1p
+
+import (
+ "sort"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = sort.Strings
+ _ = gps.Solve
+)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/ren/m1p/b.go b/vendor/github.com/sdboyer/gps/_testdata/src/ren/m1p/b.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/ren/m1p/b.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/ren/m1p/b.go
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/ren/simple/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/ren/simple/a.go
new file mode 100644
index 0000000..300b730
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/ren/simple/a.go
@@ -0,0 +1,12 @@
+package simple
+
+import (
+ "sort"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = sort.Strings
+ _ = gps.Solve
+)
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/simple/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/simple/a.go
new file mode 100644
index 0000000..300b730
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/simple/a.go
@@ -0,0 +1,12 @@
+package simple
+
+import (
+ "sort"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = sort.Strings
+ _ = gps.Solve
+)
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/simpleallt/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/simpleallt/a.go
new file mode 100644
index 0000000..300b730
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/simpleallt/a.go
@@ -0,0 +1,12 @@
+package simple
+
+import (
+ "sort"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = sort.Strings
+ _ = gps.Solve
+)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/simpleallt/a_test.go b/vendor/github.com/sdboyer/gps/_testdata/src/simpleallt/a_test.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/simpleallt/a_test.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/simpleallt/a_test.go
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/simpleallt/t_test.go b/vendor/github.com/sdboyer/gps/_testdata/src/simpleallt/t_test.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/simpleallt/t_test.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/simpleallt/t_test.go
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/simplet/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/simplet/a.go
new file mode 100644
index 0000000..300b730
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/simplet/a.go
@@ -0,0 +1,12 @@
+package simple
+
+import (
+ "sort"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = sort.Strings
+ _ = gps.Solve
+)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/simplet/t_test.go b/vendor/github.com/sdboyer/gps/_testdata/src/simplet/t_test.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/simplet/t_test.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/simplet/t_test.go
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/simplext/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/simplext/a.go
new file mode 100644
index 0000000..300b730
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/simplext/a.go
@@ -0,0 +1,12 @@
+package simple
+
+import (
+ "sort"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = sort.Strings
+ _ = gps.Solve
+)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/simplext/a_test.go b/vendor/github.com/sdboyer/gps/_testdata/src/simplext/a_test.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/simplext/a_test.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/simplext/a_test.go
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/t/t_test.go b/vendor/github.com/sdboyer/gps/_testdata/src/t/t_test.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/t/t_test.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/t/t_test.go
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/twopkgs/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/twopkgs/a.go
new file mode 100644
index 0000000..300b730
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/twopkgs/a.go
@@ -0,0 +1,12 @@
+package simple
+
+import (
+ "sort"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = sort.Strings
+ _ = gps.Solve
+)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/twopkgs/b.go b/vendor/github.com/sdboyer/gps/_testdata/src/twopkgs/b.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/twopkgs/b.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/twopkgs/b.go
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/varied/locals.go b/vendor/github.com/sdboyer/gps/_testdata/src/varied/locals.go
similarity index 80%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/varied/locals.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/varied/locals.go
index 3f73943..5c7e6c7 100644
--- a/vendor/github.com/sdboyer/vsolver/_testdata/src/varied/locals.go
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/varied/locals.go
@@ -1,13 +1,13 @@
 package main
 
 import (
-	"varied/otherpath"
 	"varied/namemismatch"
+	"varied/otherpath"
 	"varied/simple"
 )
 
 var (
-    _ = simple.S
-    _ = nm.V
+	_ = simple.S
+	_ = nm.V
 	_ = otherpath.O
 )
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/varied/m1p/a.go b/vendor/github.com/sdboyer/gps/_testdata/src/varied/m1p/a.go
new file mode 100644
index 0000000..65fd7ca
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/varied/m1p/a.go
@@ -0,0 +1,12 @@
+package m1p
+
+import (
+ "sort"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ M = sort.Strings
+ _ = gps.Solve
+)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/varied/m1p/b.go b/vendor/github.com/sdboyer/gps/_testdata/src/varied/m1p/b.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/varied/m1p/b.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/varied/m1p/b.go
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/varied/main.go b/vendor/github.com/sdboyer/gps/_testdata/src/varied/main.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/varied/main.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/varied/main.go
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/varied/namemismatch/nm.go b/vendor/github.com/sdboyer/gps/_testdata/src/varied/namemismatch/nm.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/varied/namemismatch/nm.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/varied/namemismatch/nm.go
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/varied/otherpath/otherpath_test.go b/vendor/github.com/sdboyer/gps/_testdata/src/varied/otherpath/otherpath_test.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/varied/otherpath/otherpath_test.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/varied/otherpath/otherpath_test.go
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/varied/simple/another/another.go b/vendor/github.com/sdboyer/gps/_testdata/src/varied/simple/another/another.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/varied/simple/another/another.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/varied/simple/another/another.go
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/varied/simple/another/another_test.go b/vendor/github.com/sdboyer/gps/_testdata/src/varied/simple/another/another_test.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/varied/simple/another/another_test.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/varied/simple/another/another_test.go
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/varied/simple/another/locals.go b/vendor/github.com/sdboyer/gps/_testdata/src/varied/simple/another/locals.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/varied/simple/another/locals.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/varied/simple/another/locals.go
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/varied/simple/locals.go b/vendor/github.com/sdboyer/gps/_testdata/src/varied/simple/locals.go
similarity index 77%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/varied/simple/locals.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/varied/simple/locals.go
index 7717e80..6ebb90f 100644
--- a/vendor/github.com/sdboyer/vsolver/_testdata/src/varied/simple/locals.go
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/varied/simple/locals.go
@@ -3,5 +3,5 @@
 import "varied/simple/another"
 
 var (
-    _ = another.H
+	_ = another.H
 )
diff --git a/vendor/github.com/sdboyer/gps/_testdata/src/varied/simple/simple.go b/vendor/github.com/sdboyer/gps/_testdata/src/varied/simple/simple.go
new file mode 100644
index 0000000..c8fbb05
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/_testdata/src/varied/simple/simple.go
@@ -0,0 +1,12 @@
+package simple
+
+import (
+ "go/parser"
+
+ "github.com/sdboyer/gps"
+)
+
+var (
+ _ = parser.ParseFile
+ S = gps.Prepare
+)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/xt/a_test.go b/vendor/github.com/sdboyer/gps/_testdata/src/xt/a_test.go
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/_testdata/src/xt/a_test.go
rename to vendor/github.com/sdboyer/gps/_testdata/src/xt/a_test.go
diff --git a/vendor/github.com/sdboyer/gps/analysis.go b/vendor/github.com/sdboyer/gps/analysis.go
new file mode 100644
index 0000000..0cb93ba
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/analysis.go
@@ -0,0 +1,950 @@
+package gps
+
+import (
+ "bytes"
+ "fmt"
+ "go/build"
+ "io"
+ "io/ioutil"
+ "os"
+ "path/filepath"
+ "sort"
+ "strings"
+ "text/scanner"
+)
+
+var osList []string
+var archList []string
+var stdlib = make(map[string]bool)
+
+const stdlibPkgs string = "archive archive/tar archive/zip bufio builtin bytes compress compress/bzip2 compress/flate compress/gzip compress/lzw compress/zlib container container/heap container/list container/ring context crypto crypto/aes crypto/cipher crypto/des crypto/dsa crypto/ecdsa crypto/elliptic crypto/hmac crypto/md5 crypto/rand crypto/rc4 crypto/rsa crypto/sha1 crypto/sha256 crypto/sha512 crypto/subtle crypto/tls crypto/x509 crypto/x509/pkix database database/sql database/sql/driver debug debug/dwarf debug/elf debug/gosym debug/macho debug/pe debug/plan9obj encoding encoding/ascii85 encoding/asn1 encoding/base32 encoding/base64 encoding/binary encoding/csv encoding/gob encoding/hex encoding/json encoding/pem encoding/xml errors expvar flag fmt go go/ast go/build go/constant go/doc go/format go/importer go/parser go/printer go/scanner go/token go/types hash hash/adler32 hash/crc32 hash/crc64 hash/fnv html html/template image image/color image/color/palette image/draw image/gif image/jpeg image/png index index/suffixarray io io/ioutil log log/syslog math math/big math/cmplx math/rand mime mime/multipart mime/quotedprintable net net/http net/http/cgi net/http/cookiejar net/http/fcgi net/http/httptest net/http/httputil net/http/pprof net/mail net/rpc net/rpc/jsonrpc net/smtp net/textproto net/url os os/exec os/signal os/user path path/filepath reflect regexp regexp/syntax runtime runtime/cgo runtime/debug runtime/msan runtime/pprof runtime/race runtime/trace sort strconv strings sync sync/atomic syscall testing testing/iotest testing/quick text text/scanner text/tabwriter text/template text/template/parse time unicode unicode/utf16 unicode/utf8 unsafe"
+
+// Before appengine moved to google.golang.org/appengine, it had a magic
+// stdlib-like import path. We have to ignore all of these.
+const appenginePkgs string = "appengine/aetest appengine/blobstore appengine/capability appengine/channel appengine/cloudsql appengine/cmd appengine/cmd/aebundler appengine/cmd/aedeploy appengine/cmd/aefix appengine/datastore appengine/delay appengine/demos appengine/demos/guestbook appengine/demos/guestbook/templates appengine/demos/helloworld appengine/file appengine/image appengine/internal appengine/internal/aetesting appengine/internal/app_identity appengine/internal/base appengine/internal/blobstore appengine/internal/capability appengine/internal/channel appengine/internal/datastore appengine/internal/image appengine/internal/log appengine/internal/mail appengine/internal/memcache appengine/internal/modules appengine/internal/remote_api appengine/internal/search appengine/internal/socket appengine/internal/system appengine/internal/taskqueue appengine/internal/urlfetch appengine/internal/user appengine/internal/xmpp appengine/log appengine/mail appengine/memcache appengine/module appengine/remote_api appengine/runtime appengine/search appengine/socket appengine/taskqueue appengine/urlfetch appengine/user appengine/xmpp"
+
+func init() {
+ // The supported systems are listed in
+ // https://github.com/golang/go/blob/master/src/go/build/syslist.go
+ // The lists are not exported so we need to duplicate them here.
+ osListString := "android darwin dragonfly freebsd linux nacl netbsd openbsd plan9 solaris windows"
+ osList = strings.Split(osListString, " ")
+
+ archListString := "386 amd64 amd64p32 arm armbe arm64 arm64be ppc64 ppc64le mips mipsle mips64 mips64le mips64p32 mips64p32le ppc s390 s390x sparc sparc64"
+ archList = strings.Split(archListString, " ")
+
+ for _, pkg := range strings.Split(stdlibPkgs, " ") {
+ stdlib[pkg] = true
+ }
+ for _, pkg := range strings.Split(appenginePkgs, " ") {
+ stdlib[pkg] = true
+ }
+
+ // Also ignore C
+ // TODO(sdboyer) actually figure out how to deal with cgo
+ stdlib["C"] = true
+}
+
+// listPackages lists info for all packages at or below the provided fileRoot.
+//
+// Directories without any valid Go files, and directories containing
+// multiple packages, are recorded in the returned tree with an error rather
+// than a Package.
+//
+// The importRoot parameter is prepended to the relative path when determining
+// the import path for each package. The obvious case is for something typical,
+// like:
+//
+// fileRoot = "/home/user/go/src/github.com/foo/bar"
+// importRoot = "github.com/foo/bar"
+//
+// where the fileRoot and importRoot align. However, if you provide:
+//
+// fileRoot = "/home/user/workspace/path/to/repo"
+// importRoot = "github.com/foo/bar"
+//
+// then the root package at path/to/repo will be ascribed import path
+// "github.com/foo/bar", and its subpackage "baz" will be
+// "github.com/foo/bar/baz".
+//
+// A PackageTree is returned, which contains the ImportRoot and a map of import path
+// to PackageOrErr - each path under the root that exists will have either a
+// Package, or an error describing why the directory is not a valid package.
+func listPackages(fileRoot, importRoot string) (PackageTree, error) {
+ // Set up a build.ctx for parsing
+ ctx := build.Default
+ ctx.GOROOT = ""
+ ctx.GOPATH = ""
+ ctx.UseAllFiles = true
+
+ ptree := PackageTree{
+ ImportRoot: importRoot,
+ Packages: make(map[string]PackageOrErr),
+ }
+
+ // mkfilter returns two funcs that can be injected into a build.Context,
+ // letting us filter the results into an "in" and "out" set.
+ mkfilter := func(files map[string]struct{}) (in, out func(dir string) (fi []os.FileInfo, err error)) {
+ in = func(dir string) (fi []os.FileInfo, err error) {
+ all, err := ioutil.ReadDir(dir)
+ if err != nil {
+ return nil, err
+ }
+
+ for _, f := range all {
+ if _, exists := files[f.Name()]; exists {
+ fi = append(fi, f)
+ }
+ }
+ return fi, nil
+ }
+
+ out = func(dir string) (fi []os.FileInfo, err error) {
+ all, err := ioutil.ReadDir(dir)
+ if err != nil {
+ return nil, err
+ }
+
+ for _, f := range all {
+ if _, exists := files[f.Name()]; !exists {
+ fi = append(fi, f)
+ }
+ }
+ return fi, nil
+ }
+
+ return
+ }
+
+ // helper func to create a Package from a *build.Package
+ happy := func(importPath string, p *build.Package) Package {
+ // Happy path - simple parsing worked
+ pkg := Package{
+ ImportPath: importPath,
+ CommentPath: p.ImportComment,
+ Name: p.Name,
+ Imports: p.Imports,
+ TestImports: dedupeStrings(p.TestImports, p.XTestImports),
+ }
+
+ return pkg
+ }
+
+ err := filepath.Walk(fileRoot, func(path string, fi os.FileInfo, err error) error {
+ if err != nil && err != filepath.SkipDir {
+ return err
+ }
+ if !fi.IsDir() {
+ return nil
+ }
+
+ // Skip dirs that are known to hold non-local/dependency code.
+ //
+ // We don't skip _* or testdata dirs because, while it may be poor
+ // form, importing them is not a compilation error.
+ switch fi.Name() {
+ case "vendor", "Godeps":
+ return filepath.SkipDir
+ }
+ // We do skip dot-dirs, though, because it's such a ubiquitous standard
+ // that they not be visited by normal commands, and because things get
+ // really weird if we don't.
+ //
+ // TODO(sdboyer) does this entail that we should chuck dot-led import
+ // paths later on?
+ if strings.HasPrefix(fi.Name(), ".") {
+ return filepath.SkipDir
+ }
+
+ // Compute the import path. Run the result through ToSlash(), so that windows
+ // paths are normalized to Unix separators, as import paths are expected
+ // to be.
+ ip := filepath.ToSlash(filepath.Join(importRoot, strings.TrimPrefix(path, fileRoot)))
+
+ // Find all the imports, across all os/arch combos
+ p, err := ctx.ImportDir(path, analysisImportMode())
+ var pkg Package
+ if err == nil {
+ pkg = happy(ip, p)
+ } else {
+ switch terr := err.(type) {
+ case *build.NoGoError:
+ ptree.Packages[ip] = PackageOrErr{
+ Err: err,
+ }
+ return nil
+ case *build.MultiplePackageError:
+ // Set this up preemptively, so we can easily just return out if
+ // something goes wrong. Otherwise, it'll get transparently
+ // overwritten later.
+ ptree.Packages[ip] = PackageOrErr{
+ Err: err,
+ }
+
+ // For now, we're punting entirely on dealing with os/arch
+ // combinations. That will be a more significant refactor.
+ //
+ // However, there is one case we want to allow here - one or
+ // more files with "+build ignore" with package `main`. (Ignore
+ // is just a convention, but for now it's good enough to just
+ // check that.) This is a fairly common way to give examples,
+ // and to make a more sophisticated build system than a Makefile
+ // allows, so we want to support that case. So, transparently
+ // lump the deps together.
+ mains := make(map[string]struct{})
+ for k, pkgname := range terr.Packages {
+ if pkgname == "main" {
+ tags, err2 := readFileBuildTags(filepath.Join(path, terr.Files[k]))
+ if err2 != nil {
+ return nil
+ }
+
+ var hasignore bool
+ for _, t := range tags {
+ if t == "ignore" {
+ hasignore = true
+ break
+ }
+ }
+ if !hasignore {
+ // No ignore tag found - bail out
+ return nil
+ }
+ mains[terr.Files[k]] = struct{}{}
+ }
+ }
+ // Make filtering funcs that will let us look only at the main
+ // files, and exclude the main files; inf and outf, respectively
+ inf, outf := mkfilter(mains)
+
+ // outf first; if there's another err there, we bail out with a
+ // return
+ ctx.ReadDir = outf
+ po, err2 := ctx.ImportDir(path, analysisImportMode())
+ if err2 != nil {
+ return nil
+ }
+ ctx.ReadDir = inf
+ pi, err2 := ctx.ImportDir(path, analysisImportMode())
+ if err2 != nil {
+ return nil
+ }
+ ctx.ReadDir = nil
+
+ // Use the other files as baseline, they're the main stuff
+ pkg = happy(ip, po)
+ mpkg := happy(ip, pi)
+ pkg.Imports = dedupeStrings(pkg.Imports, mpkg.Imports)
+ pkg.TestImports = dedupeStrings(pkg.TestImports, mpkg.TestImports)
+ default:
+ return err
+ }
+ }
+
+ // This area has some...fuzzy rules, but check all the imports for
+ // local/relative/dot-ness, and record an error for the package if we
+ // see any.
+ var lim []string
+ for _, imp := range append(pkg.Imports, pkg.TestImports...) {
+ switch {
+ // Do allow the single-dot, at least for now
+ case imp == "..":
+ lim = append(lim, imp)
+ // ignore stdlib done this way, b/c that's what the go tooling does
+ case strings.HasPrefix(imp, "./"):
+ if stdlib[imp[2:]] {
+ lim = append(lim, imp)
+ }
+ case strings.HasPrefix(imp, "../"):
+ if stdlib[imp[3:]] {
+ lim = append(lim, imp)
+ }
+ }
+ }
+
+ if len(lim) > 0 {
+ ptree.Packages[ip] = PackageOrErr{
+ Err: &LocalImportsError{
+ Dir: ip,
+ LocalImports: lim,
+ },
+ }
+ } else {
+ ptree.Packages[ip] = PackageOrErr{
+ P: pkg,
+ }
+ }
+
+ return nil
+ })
+
+ if err != nil {
+ return PackageTree{}, err
+ }
+
+ return ptree, nil
+}
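The `mkfilter`/`ReadDir` injection above is the crux of how `listPackages` salvages the main-plus-build-ignore case: the same directory is parsed twice through two filtered views. A standalone sketch of the trick (the `demo` function and file names are hypothetical, not part of gps):

```go
package main

import (
	"fmt"
	"go/build"
	"io/ioutil"
	"os"
	"path/filepath"
)

// demo shows the ReadDir-injection trick: a directory holding two package
// clauses fails a plain ImportDir, but a filtered view of the same
// directory parses cleanly.
func demo() (string, error) {
	dir, err := ioutil.TempDir("", "filter")
	if err != nil {
		return "", err
	}
	defer os.RemoveAll(dir)

	// Two files declaring different packages in one directory.
	if err := ioutil.WriteFile(filepath.Join(dir, "a.go"), []byte("package a\n"), 0644); err != nil {
		return "", err
	}
	if err := ioutil.WriteFile(filepath.Join(dir, "b.go"), []byte("package b\n"), 0644); err != nil {
		return "", err
	}

	ctx := build.Default
	if _, err := ctx.ImportDir(dir, 0); err == nil {
		return "", fmt.Errorf("expected a multiple-package error")
	}

	// Restrict the file set the parser sees to a.go only.
	ctx.ReadDir = func(d string) ([]os.FileInfo, error) {
		all, err := ioutil.ReadDir(d)
		if err != nil {
			return nil, err
		}
		var out []os.FileInfo
		for _, f := range all {
			if f.Name() == "a.go" {
				out = append(out, f)
			}
		}
		return out, nil
	}
	p, err := ctx.ImportDir(dir, 0)
	if err != nil {
		return "", err
	}
	return p.Name, nil
}

func main() {
	name, err := demo()
	fmt.Println(name, err)
}
```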
+
+// LocalImportsError indicates that a package contains at least one relative
+// import that will prevent it from compiling.
+//
+// TODO(sdboyer) add a Files property once we're doing our own per-file parsing
+type LocalImportsError struct {
+ Dir string
+ LocalImports []string
+}
+
+func (e *LocalImportsError) Error() string {
+ return fmt.Sprintf("import path %s had problematic local imports", e.Dir)
+}
+
+type wm struct {
+ err error
+ ex map[string]bool
+ in map[string]bool
+}
+
+// wmToReach takes an externalReach()-style workmap and transitively walks all
+// internal imports until they reach an external path or terminate, then
+// translates the results into a slice of external imports for each internal
+// pkg.
+//
+// The basedir string, with a trailing slash ensured, will be stripped from the
+// keys of the returned map.
+func wmToReach(workmap map[string]wm, basedir string) map[string][]string {
+ // Uses depth-first exploration to compute reachability into external
+ // packages, dropping any internal packages on "poisoned paths" - a path
+ // containing a package with an error, or with a dep on an internal package
+ // that's missing.
+
+ const (
+ white uint8 = iota
+ grey
+ black
+ )
+
+ colors := make(map[string]uint8)
+ allreachsets := make(map[string]map[string]struct{})
+
+ // poison is a helper func to eliminate specific reachsets from allreachsets
+ poison := func(path []string) {
+ for _, ppkg := range path {
+ delete(allreachsets, ppkg)
+ }
+ }
+
+ var dfe func(string, []string) bool
+
+ // dfe is the depth-first-explorer that computes safe, error-free external
+ // reach map.
+ //
+ // pkg is the import path of the pkg currently being visited; path is the
+ // stack of parent packages we've visited to get to pkg. The return value
+ // indicates whether the level completed successfully (true) or if it was
+ // poisoned (false).
+ //
+ // TODO(sdboyer) some deft improvements could probably be made by passing the list of
+ // parent reachsets, rather than a list of parent package string names.
+ // might be able to eliminate the use of allreachsets map-of-maps entirely.
+ dfe = func(pkg string, path []string) bool {
+ // white is the zero value of uint8, which is what we want if the pkg
+ // isn't in the colors map, so this works fine
+ switch colors[pkg] {
+ case white:
+ // first visit to this pkg; mark it as in-process (grey)
+ colors[pkg] = grey
+
+ // make sure it's present and w/out errs
+ w, exists := workmap[pkg]
+ if !exists || w.err != nil {
+ // Does not exist or has an err; poison self and all parents
+ poison(path)
+
+ // we know we're done here, so mark it black
+ colors[pkg] = black
+ return false
+ }
+ // pkg exists with no errs. mark it as in-process (grey), and start
+ // a reachmap for it
+ //
+ // TODO(sdboyer) use sync.Pool here? can be lots of explicit map alloc/dealloc
+ rs := make(map[string]struct{})
+
+ // Push self onto the path slice. Passing this as a value has the
+ // effect of auto-popping the slice, while also giving us safe
+ // memory reuse.
+ path = append(path, pkg)
+
+ // Dump this package's external pkgs into its own reachset. Separate
+ // loop from the parent dump to avoid nested map loop lookups.
+ for ex := range w.ex {
+ rs[ex] = struct{}{}
+ }
+ allreachsets[pkg] = rs
+
+ // Push this pkg's external imports into all parent reachsets. Not
+ // all parents will necessarily have a reachset; none, some, or all
+ // could have been poisoned by a different path than what we're on
+ // right now. (Or we could be at depth 0)
+ for _, ppkg := range path {
+ if prs, exists := allreachsets[ppkg]; exists {
+ for ex := range w.ex {
+ prs[ex] = struct{}{}
+ }
+ }
+ }
+
+ // Now, recurse until done, or a false bubbles up, indicating the
+ // path is poisoned.
+ var clean bool
+ for in := range w.in {
+ // It's possible, albeit weird, for a package to import itself.
+ // If we try to visit self, though, then it erroneously poisons
+ // the path, as it would be interpreted as grey. In reality,
+ // this becomes a no-op, so just skip it.
+ if in == pkg {
+ continue
+ }
+
+ clean = dfe(in, path)
+ if !clean {
+ // Path is poisoned. Our reachmap was already deleted by the
+ // path we're returning from; mark ourselves black, then
+ // bubble up the poison. This is OK to do early, before
+ // exploring all internal imports, because the outer loop
+ // visits all internal packages anyway.
+ //
+ // In fact, stopping early is preferable - white subpackages
+ // won't have to iterate pointlessly through a parent path
+ // with no reachset.
+ colors[pkg] = black
+ return false
+ }
+ }
+
+ // Fully done with this pkg; no transitive problems.
+ colors[pkg] = black
+ return true
+
+ case grey:
+ // grey means an import cycle; guaranteed badness right here.
+ //
+ // FIXME handle import cycles by dropping everything involved. i
+ // think we need to compute SCC, then drop *all* of them?
+ colors[pkg] = black
+ poison(append(path, pkg)) // poison self and parents
+
+ case black:
+ // black means we're done with the package. If it has an entry in
+ // allreachsets, it completed successfully. If not, it was poisoned,
+ // and we need to bubble the poison back up.
+ rs, exists := allreachsets[pkg]
+ if !exists {
+ // just poison parents; self was necessarily already poisoned
+ poison(path)
+ return false
+ }
+
+ // It's good; pull over all of the external imports from its reachset
+ // into all non-poisoned parent reachsets
+ for _, ppkg := range path {
+ if prs, exists := allreachsets[ppkg]; exists {
+ for ex := range rs {
+ prs[ex] = struct{}{}
+ }
+ }
+ }
+ return true
+
+ default:
+ panic(fmt.Sprintf("invalid color marker %v for %s", colors[pkg], pkg))
+ }
+
+ // only the grey (cycle) case falls through to here
+ return false
+ }
+
+ // Run the depth-first exploration.
+ //
+ // Don't bother computing graph sources, this straightforward loop works
+ // comparably well, and fits nicely with an escape hatch in the dfe.
+ var path []string
+ for pkg := range workmap {
+ dfe(pkg, path)
+ }
+
+ if len(allreachsets) == 0 {
+ return nil
+ }
+
+ // Flatten allreachsets into the final reachlist
+ rt := strings.TrimSuffix(basedir, string(os.PathSeparator)) + string(os.PathSeparator)
+ rm := make(map[string][]string)
+ for pkg, rs := range allreachsets {
+ rlen := len(rs)
+ if rlen == 0 {
+ rm[strings.TrimPrefix(pkg, rt)] = nil
+ continue
+ }
+
+ edeps := make([]string, rlen)
+ k := 0
+ for opkg := range rs {
+ edeps[k] = opkg
+ k++
+ }
+
+ sort.Strings(edeps)
+ rm[strings.TrimPrefix(pkg, rt)] = edeps
+ }
+
+ return rm
+}
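The tri-color poisoning walk in `wmToReach` can be sketched independently of gps's types. Below, `node` and `reach` are illustrative stand-ins (not the library's API): each package accumulates the external imports of everything it can reach, while paths through a missing package or an import cycle are poisoned away.

```go
package main

import (
	"fmt"
	"sort"
)

// node is an illustrative stand-in for the wm workmap entry: ex holds
// external imports, in holds internal ones.
type node struct {
	ex, in []string
}

// reach runs a white/grey/black DFS: each package accumulates the external
// imports of everything it can reach; bad paths are "poisoned".
func reach(graph map[string]node) map[string][]string {
	const (
		white = iota
		grey
		black
	)
	colors := make(map[string]int)
	sets := make(map[string]map[string]bool)

	// poison drops the reachsets of every package on the path
	poison := func(path []string) {
		for _, p := range path {
			delete(sets, p)
		}
	}

	var dfe func(pkg string, path []string) bool
	dfe = func(pkg string, path []string) bool {
		switch colors[pkg] {
		case white:
			colors[pkg] = grey
			n, ok := graph[pkg]
			if !ok { // missing package poisons all parents
				poison(path)
				colors[pkg] = black
				return false
			}
			sets[pkg] = make(map[string]bool)
			path = append(path, pkg)
			// push this pkg's externals into self and all live parents
			for _, ex := range n.ex {
				for _, p := range path {
					if s, ok := sets[p]; ok {
						s[ex] = true
					}
				}
			}
			for _, in := range n.in {
				if !dfe(in, path) {
					colors[pkg] = black
					return false
				}
			}
			colors[pkg] = black
			return true
		case grey: // import cycle
			poison(append(path, pkg))
			colors[pkg] = black
			return false
		default: // black: merge its finished set into live parents
			s, ok := sets[pkg]
			if !ok { // was poisoned earlier
				poison(path)
				return false
			}
			for _, p := range path {
				if ps, ok := sets[p]; ok {
					for ex := range s {
						ps[ex] = true
					}
				}
			}
			return true
		}
	}

	for pkg := range graph {
		dfe(pkg, nil)
	}

	out := make(map[string][]string)
	for pkg, s := range sets {
		var l []string
		for ex := range s {
			l = append(l, ex)
		}
		sort.Strings(l)
		out[pkg] = l
	}
	return out
}

func main() {
	fmt.Println(reach(map[string]node{
		"A":     {in: []string{"A/bar"}, ex: []string{"B/foo"}},
		"A/bar": {ex: []string{"B/baz"}},
	}))
}
```

This omits the real function's self-import skip and basedir trimming, but shows the central invariant: a package's reachset survives only if every path out of it is error-free.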
+
+func readBuildTags(p string) ([]string, error) {
+ _, err := os.Stat(p)
+ if err != nil {
+ return []string{}, err
+ }
+
+ d, err := os.Open(p)
+ if err != nil {
+ return []string{}, err
+ }
+
+ objects, err := d.Readdir(-1)
+ if err != nil {
+ return []string{}, err
+ }
+
+ var tags []string
+ for _, obj := range objects {
+
+ // only process Go files
+ if strings.HasSuffix(obj.Name(), ".go") {
+ fp := filepath.Join(p, obj.Name())
+
+ co, err := readGoContents(fp)
+ if err != nil {
+ return []string{}, err
+ }
+
+ // Only look at places where we had a code comment.
+ if len(co) > 0 {
+ t := findTags(co)
+ for _, tg := range t {
+ found := false
+ for _, tt := range tags {
+ if tt == tg {
+ found = true
+ }
+ }
+ if !found {
+ tags = append(tags, tg)
+ }
+ }
+ }
+ }
+ }
+
+ return tags, nil
+}
+
+func readFileBuildTags(fp string) ([]string, error) {
+ co, err := readGoContents(fp)
+ if err != nil {
+ return []string{}, err
+ }
+
+ var tags []string
+ // Only look at places where we had a code comment.
+ if len(co) > 0 {
+ t := findTags(co)
+ for _, tg := range t {
+ found := false
+ for _, tt := range tags {
+ if tt == tg {
+ found = true
+ }
+ }
+ if !found {
+ tags = append(tags, tg)
+ }
+ }
+ }
+
+ return tags, nil
+}
+
+// readGoContents reads the contents of a Go file up to the package
+// declaration. This can be used to find the build tags.
+func readGoContents(fp string) ([]byte, error) {
+ f, err := os.Open(fp)
+ if err != nil {
+ return []byte{}, err
+ }
+ defer f.Close()
+
+ var s scanner.Scanner
+ s.Init(f)
+ var tok rune
+ var pos scanner.Position
+ for tok != scanner.EOF {
+ tok = s.Scan()
+
+ // Getting the token text will skip comments by default.
+ tt := s.TokenText()
+ // build tags will not be after the package declaration.
+ if tt == "package" {
+ pos = s.Position
+ break
+ }
+ }
+
+ var buf bytes.Buffer
+ f.Seek(0, 0)
+ _, err = io.CopyN(&buf, f, int64(pos.Offset))
+ if err != nil {
+ return []byte{}, err
+ }
+
+ return buf.Bytes(), nil
+}
+
+// findTags extracts any +build tags from a byte slice of Go source.
+func findTags(co []byte) []string {
+ p := co
+ var tgs []string
+ for len(p) > 0 {
+ line := p
+ if i := bytes.IndexByte(line, '\n'); i >= 0 {
+ line, p = line[:i], p[i+1:]
+ } else {
+ p = p[len(p):]
+ }
+ line = bytes.TrimSpace(line)
+ // Only look at comment lines that are well formed in the Go style
+ if bytes.HasPrefix(line, []byte("//")) {
+ line = bytes.TrimSpace(line[len([]byte("//")):])
+ if len(line) > 0 && line[0] == '+' {
+ f := strings.Fields(string(line))
+
+ // We've found a +build tag line.
+ if f[0] == "+build" {
+ for _, tg := range f[1:] {
+ tgs = append(tgs, tg)
+ }
+ }
+ }
+ }
+ }
+
+ return tgs
+}
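Together, `readGoContents` and `findTags` amount to: stop reading at the package clause, then harvest `// +build` comment lines. A minimal line-based sketch of that combined behavior (`buildTags` is a hypothetical name, not the library's API):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// buildTags scans Go source line by line, collecting the space-separated
// tags from any "// +build" comment before the package declaration.
func buildTags(src string) []string {
	var tags []string
	sc := bufio.NewScanner(strings.NewReader(src))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "package ") {
			break // build tags cannot follow the package clause
		}
		if !strings.HasPrefix(line, "//") {
			continue
		}
		fields := strings.Fields(strings.TrimPrefix(line, "//"))
		if len(fields) > 0 && fields[0] == "+build" {
			tags = append(tags, fields[1:]...)
		}
	}
	return tags
}

func main() {
	fmt.Println(buildTags("// +build ignore linux\n\npackage main\n")) // [ignore linux]
}
```

Unlike the real code, this doesn't dedupe across multiple tag lines, but it captures the same stop-at-package rule.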
+
+// Get an OS value that's not the one passed in.
+func getOsValue(n string) string {
+ for _, o := range osList {
+ if o != n {
+ return o
+ }
+ }
+
+ return n
+}
+
+func isSupportedOs(n string) bool {
+ for _, o := range osList {
+ if o == n {
+ return true
+ }
+ }
+
+ return false
+}
+
+// Get an Arch value that's not the one passed in.
+func getArchValue(n string) string {
+ for _, o := range archList {
+ if o != n {
+ return o
+ }
+ }
+
+ return n
+}
+
+func isSupportedArch(n string) bool {
+ for _, o := range archList {
+ if o == n {
+ return true
+ }
+ }
+
+ return false
+}
+
+func ensureTrailingSlash(s string) string {
+ return strings.TrimSuffix(s, string(os.PathSeparator)) + string(os.PathSeparator)
+}
+
+// helper func to merge, dedupe, and sort strings
+func dedupeStrings(s1, s2 []string) (r []string) {
+ dedupe := make(map[string]bool)
+
+ if len(s1) > 0 && len(s2) > 0 {
+ for _, i := range s1 {
+ dedupe[i] = true
+ }
+ for _, i := range s2 {
+ dedupe[i] = true
+ }
+
+ for i := range dedupe {
+ r = append(r, i)
+ }
+ // And then re-sort them
+ sort.Strings(r)
+ } else if len(s1) > 0 {
+ r = s1
+ } else if len(s2) > 0 {
+ r = s2
+ }
+
+ return
+}
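The merge/dedupe/sort behavior of `dedupeStrings` boils down to the sketch below (`mergeDedupe` is a hypothetical name). Note one quirk of the original worth knowing: when either input is empty it returns the other slice as-is, unsorted.

```go
package main

import (
	"fmt"
	"sort"
)

// mergeDedupe merges two string slices, drops duplicates, and sorts.
func mergeDedupe(s1, s2 []string) []string {
	seen := make(map[string]bool)
	var r []string
	for _, s := range s1 {
		if !seen[s] {
			seen[s] = true
			r = append(r, s)
		}
	}
	for _, s := range s2 {
		if !seen[s] {
			seen[s] = true
			r = append(r, s)
		}
	}
	sort.Strings(r)
	return r
}

func main() {
	fmt.Println(mergeDedupe([]string{"sort", "fmt"}, []string{"fmt", "os"})) // [fmt os sort]
}
```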
+
+// A PackageTree represents the results of recursively parsing a tree of
+// packages, starting at the ImportRoot. The results of parsing the files in the
+// directory identified by each import path - a Package or an error - are stored
+// in the Packages map, keyed by that import path.
+type PackageTree struct {
+ ImportRoot string
+ Packages map[string]PackageOrErr
+}
+
+// PackageOrErr stores the results of attempting to parse a single directory for
+// Go source code.
+type PackageOrErr struct {
+ P Package
+ Err error
+}
+
+// ExternalReach looks through a PackageTree and computes the list of external
+// packages (not logical children of PackageTree.ImportRoot) that are
+// transitively imported by the internal packages in the tree.
+//
+// main indicates whether (true) or not (false) to include main packages in the
+// analysis. main packages are generally excluded when analyzing anything other
+// than the root project, as they inherently can't be imported.
+//
+// tests indicates whether (true) or not (false) to include imports from test
+// files in packages when computing the reach map.
+//
+// ignore is a map of import paths that, if encountered, should be excluded from
+// analysis. This exclusion applies to both internal and external packages. If
+// an external import path is ignored, it is simply omitted from the results.
+//
+// If an internal path is ignored, then it is excluded from all transitive
+// dependency chains and does not appear as a key in the final map. That is, if
+// you ignore A/foo, then the external package list for all internal packages
+// that import A/foo will not include external packages that are only reachable
+// through A/foo.
+//
+// Visually, this means that, given a PackageTree with root A and packages at A,
+// A/foo, and A/bar, and the following import chain:
+//
+// A -> A/foo -> A/bar -> B/baz
+//
+// If you ignore A/foo, then the returned map would be:
+//
+// map[string][]string{
+// "A": []string{},
+// "A/bar": []string{"B/baz"},
+// }
+//
+// It is safe to pass a nil map if there are no packages to ignore.
+func (t PackageTree) ExternalReach(main, tests bool, ignore map[string]bool) map[string][]string {
+ if ignore == nil {
+ ignore = make(map[string]bool)
+ }
+
+ // world's simplest adjacency list
+ workmap := make(map[string]wm)
+
+ var imps []string
+ for ip, perr := range t.Packages {
+ if perr.Err != nil {
+ workmap[ip] = wm{
+ err: perr.Err,
+ }
+ continue
+ }
+ p := perr.P
+
+ // Skip main packages, unless param says otherwise
+ if p.Name == "main" && !main {
+ continue
+ }
+ // Skip ignored packages
+ if ignore[ip] {
+ continue
+ }
+
+ imps = imps[:0]
+ imps = p.Imports
+ if tests {
+ imps = dedupeStrings(imps, p.TestImports)
+ }
+
+ w := wm{
+ ex: make(map[string]bool),
+ in: make(map[string]bool),
+ }
+
+ for _, imp := range imps {
+ // Skip ignored imports
+ if ignore[imp] {
+ continue
+ }
+
+ if !checkPrefixSlash(filepath.Clean(imp), t.ImportRoot) {
+ w.ex[imp] = true
+ } else {
+ if w2, seen := workmap[imp]; seen {
+ for i := range w2.ex {
+ w.ex[i] = true
+ }
+ for i := range w2.in {
+ w.in[i] = true
+ }
+ } else {
+ w.in[imp] = true
+ }
+ }
+ }
+
+ workmap[ip] = w
+ }
+
+ //return wmToReach(workmap, t.ImportRoot)
+ return wmToReach(workmap, "") // TODO(sdboyer) this passes tests, but doesn't seem right
+}
+
+// ListExternalImports computes a sorted, deduplicated list of all the external
+// packages that are reachable through imports from all valid packages in the
+// PackageTree.
+//
+// main and tests determine whether main packages and test imports should be
+// included in the calculation. "External" is defined as anything not prefixed,
+// after path cleaning, by the PackageTree.ImportRoot. This includes stdlib.
+//
+// If an internal path is ignored, all of the external packages that it uniquely
+// imports are omitted. Note, however, that no internal transitivity checks are
+// made here - every non-ignored package in the tree is considered independently
+// (with one set of exceptions, noted below). That means, given a PackageTree
+// with root A and packages at A, A/foo, and A/bar, and the following import
+// chain:
+//
+// A -> A/foo -> A/bar -> B/baz
+//
+// If you ignore A or A/foo, A/bar will still be visited, and B/baz will be
+// returned, because this method visits ALL packages in the tree, not only
+// those reachable from the root (or any other) package. If your use case
+// requires interrogating external imports with respect to only specific
+// package entry points, use ExternalReach() instead.
+//
+// It is safe to pass a nil map if there are no packages to ignore.
+//
+// If an internal package has an error (that is, PackageOrErr is Err), it is excluded from
+// consideration. Internal packages that transitively import the error package
+// are also excluded. So, if:
+//
+// -> B/foo
+// /
+// A
+// \
+// -> A/bar -> B/baz
+//
+// And A/bar has some error in it, then both A and A/bar will be eliminated from
+// consideration; neither B/foo nor B/baz will be in the results. If A/bar, with
+// its errors, is ignored, however, then A will remain, and B/foo will be in the
+// results.
+//
+// Finally, note that if a directory is named "testdata", or has a leading dot
+// or underscore, it will not be directly analyzed as a source. This is in
+// keeping with Go tooling conventions that such directories should be ignored.
+// So, if:
+//
+// A -> B/foo
+// A/.bar -> B/baz
+// A/_qux -> B/baz
+// A/testdata -> B/baz
+//
+// Then B/foo will be returned, but B/baz will not, because all three of the
+// packages that import it are in directories with disallowed names.
+//
+// HOWEVER, in keeping with the Go compiler, if one of those packages in a
+// disallowed directory is imported by a package in an allowed directory, then
+// it *will* be used. That is, while tools like go list will ignore a directory
+// named .foo, you can still import from .foo. Thus, it must be included. So,
+// if:
+//
+// -> B/foo
+// /
+// A
+// \
+// -> A/.bar -> B/baz
+//
+// A is legal, and it imports A/.bar, so the results will include B/baz.
+func (t PackageTree) ListExternalImports(main, tests bool, ignore map[string]bool) []string {
+ // First, we need a reachmap
+ rm := t.ExternalReach(main, tests, ignore)
+
+ exm := make(map[string]struct{})
+ for pkg, reach := range rm {
+ // Eliminate import paths with any elements having leading dots, leading
+ // underscores, or testdata. If these are internally reachable (which is
+ // a no-no, but possible), any external imports will have already been
+ // pulled up through ExternalReach. The key here is that we don't want
+ // to treat such packages as themselves being sources.
+ //
+ // TODO(sdboyer) strings.Split will always heap alloc, which isn't great to do
+ // in a loop like this. We could also just parse it ourselves...
+ var skip bool
+ for _, elem := range strings.Split(pkg, "/") {
+ if strings.HasPrefix(elem, ".") || strings.HasPrefix(elem, "_") || elem == "testdata" {
+ skip = true
+ break
+ }
+ }
+
+ if !skip {
+ for _, ex := range reach {
+ exm[ex] = struct{}{}
+ }
+ }
+ }
+
+ if len(exm) == 0 {
+ return nil
+ }
+
+ ex := make([]string, len(exm))
+ k := 0
+ for p := range exm {
+ ex[k] = p
+ k++
+ }
+
+ sort.Strings(ex)
+ return ex
+}
+
+// checkPrefixSlash reports whether s is equal to prefix, or begins with
+// prefix followed by a separator.
+func checkPrefixSlash(s, prefix string) bool {
+ if !strings.HasPrefix(s, prefix) {
+ return false
+ }
+ return s == prefix || strings.HasPrefix(s, ensureTrailingSlash(prefix))
+}
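The reason for the extra slash check: a bare `strings.HasPrefix` would wrongly match sibling paths that merely share a textual prefix. A sketch using "/" directly (`underRoot` is a hypothetical name; the real code goes through `ensureTrailingSlash` and `os.PathSeparator`):

```go
package main

import (
	"fmt"
	"strings"
)

// underRoot reports whether path is the root itself or lies beneath it,
// which is the property checkPrefixSlash enforces.
func underRoot(path, root string) bool {
	return path == root || strings.HasPrefix(path, root+"/")
}

func main() {
	fmt.Println(underRoot("github.com/foo/bar/baz", "github.com/foo/bar")) // true
	fmt.Println(underRoot("github.com/foo/barbaz", "github.com/foo/bar"))  // false: shared prefix, not a child
}
```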
diff --git a/vendor/github.com/sdboyer/vsolver/analysis_test.go b/vendor/github.com/sdboyer/gps/analysis_test.go
similarity index 67%
rename from vendor/github.com/sdboyer/vsolver/analysis_test.go
rename to vendor/github.com/sdboyer/gps/analysis_test.go
index 4abb537..210d036 100644
--- a/vendor/github.com/sdboyer/vsolver/analysis_test.go
+++ b/vendor/github.com/sdboyer/gps/analysis_test.go
@@ -1,6 +1,7 @@
-package vsolver
+package gps
import (
+ "fmt"
"go/build"
"os"
"path/filepath"
@@ -9,23 +10,21 @@
"testing"
)
-// externalReach() uses an easily separable algorithm, wmToReach(), to turn a
-// discovered set of packages and their imports into a proper external reach
-// map.
+// PackageTree.ExternalReach() uses an easily separable algorithm, wmToReach(),
+// to turn a discovered set of packages and their imports into a proper external
+// reach map.
//
// That algorithm is purely symbolic (no filesystem interaction), and thus is
// easy to test. This is that test.
func TestWorkmapToReach(t *testing.T) {
- empty := func() map[string]struct{} {
- return make(map[string]struct{})
+ empty := func() map[string]bool {
+ return make(map[string]bool)
}
table := map[string]struct {
- name string
workmap map[string]wm
basedir string
out map[string][]string
- err error
}{
"single": {
workmap: map[string]wm{
@@ -58,8 +57,8 @@
workmap: map[string]wm{
"foo": {
ex: empty(),
- in: map[string]struct{}{
- "foo/bar": struct{}{},
+ in: map[string]bool{
+ "foo/bar": true,
},
},
"foo/bar": {
@@ -76,13 +75,13 @@
workmap: map[string]wm{
"foo": {
ex: empty(),
- in: map[string]struct{}{
- "foo/bar": struct{}{},
+ in: map[string]bool{
+ "foo/bar": true,
},
},
"foo/bar": {
- ex: map[string]struct{}{
- "baz": struct{}{},
+ ex: map[string]bool{
+ "baz": true,
},
in: empty(),
},
@@ -96,23 +95,128 @@
},
},
},
+ "missing package is poison": {
+ workmap: map[string]wm{
+ "A": {
+ ex: map[string]bool{
+ "B/foo": true,
+ },
+ in: map[string]bool{
+ "A/foo": true, // missing
+ "A/bar": true,
+ },
+ },
+ "A/bar": {
+ ex: map[string]bool{
+ "B/baz": true,
+ },
+ in: empty(),
+ },
+ },
+ out: map[string][]string{
+ "A/bar": {
+ "B/baz",
+ },
+ },
+ },
+ "transitive missing package is poison": {
+ workmap: map[string]wm{
+ "A": {
+ ex: map[string]bool{
+ "B/foo": true,
+ },
+ in: map[string]bool{
+ "A/foo": true, // transitively missing
+ "A/quux": true,
+ },
+ },
+ "A/foo": {
+ ex: map[string]bool{
+ "C/flugle": true,
+ },
+ in: map[string]bool{
+ "A/bar": true, // missing
+ },
+ },
+ "A/quux": {
+ ex: map[string]bool{
+ "B/baz": true,
+ },
+ in: empty(),
+ },
+ },
+ out: map[string][]string{
+ "A/quux": {
+ "B/baz",
+ },
+ },
+ },
+ "err'd package is poison": {
+ workmap: map[string]wm{
+ "A": {
+ ex: map[string]bool{
+ "B/foo": true,
+ },
+ in: map[string]bool{
+ "A/foo": true, // err'd
+ "A/bar": true,
+ },
+ },
+ "A/foo": {
+ err: fmt.Errorf("err pkg"),
+ },
+ "A/bar": {
+ ex: map[string]bool{
+ "B/baz": true,
+ },
+ in: empty(),
+ },
+ },
+ out: map[string][]string{
+ "A/bar": {
+ "B/baz",
+ },
+ },
+ },
+ "transitive err'd package is poison": {
+ workmap: map[string]wm{
+ "A": {
+ ex: map[string]bool{
+ "B/foo": true,
+ },
+ in: map[string]bool{
+ "A/foo": true, // transitively err'd
+ "A/quux": true,
+ },
+ },
+ "A/foo": {
+ ex: map[string]bool{
+ "C/flugle": true,
+ },
+ in: map[string]bool{
+ "A/bar": true, // err'd
+ },
+ },
+ "A/bar": {
+ err: fmt.Errorf("err pkg"),
+ },
+ "A/quux": {
+ ex: map[string]bool{
+ "B/baz": true,
+ },
+ in: empty(),
+ },
+ },
+ out: map[string][]string{
+ "A/quux": {
+ "B/baz",
+ },
+ },
+ },
}
for name, fix := range table {
- out, err := wmToReach(fix.workmap, fix.basedir)
-
- if fix.out == nil {
- if err == nil {
- t.Errorf("wmToReach(%q): Error expected but not received", name)
- }
- continue
- }
-
- if err != nil {
- t.Errorf("wmToReach(%q): %v", name, err)
- continue
- }
-
+ out := wmToReach(fix.workmap, fix.basedir)
if !reflect.DeepEqual(out, fix.out) {
t.Errorf("wmToReach(%q): Did not get expected reach map:\n\t(GOT): %s\n\t(WNT): %s", name, out, fix.out)
}
@@ -137,7 +241,7 @@
out: PackageTree{
ImportRoot: "empty",
Packages: map[string]PackageOrErr{
- "empty": PackageOrErr{
+ "empty": {
Err: &build.NoGoError{
Dir: j("empty"),
},
@@ -152,13 +256,13 @@
out: PackageTree{
ImportRoot: "simple",
Packages: map[string]PackageOrErr{
- "simple": PackageOrErr{
+ "simple": {
P: Package{
ImportPath: "simple",
CommentPath: "",
Name: "simple",
Imports: []string{
- "github.com/sdboyer/vsolver",
+ "github.com/sdboyer/gps",
"sort",
},
},
@@ -172,13 +276,13 @@
out: PackageTree{
ImportRoot: "arbitrary",
Packages: map[string]PackageOrErr{
- "arbitrary": PackageOrErr{
+ "arbitrary": {
P: Package{
ImportPath: "arbitrary",
CommentPath: "",
Name: "simple",
Imports: []string{
- "github.com/sdboyer/vsolver",
+ "github.com/sdboyer/gps",
"sort",
},
},
@@ -192,7 +296,7 @@
out: PackageTree{
ImportRoot: "simple",
Packages: map[string]PackageOrErr{
- "simple": PackageOrErr{
+ "simple": {
P: Package{
ImportPath: "simple",
CommentPath: "",
@@ -213,7 +317,7 @@
out: PackageTree{
ImportRoot: "simple",
Packages: map[string]PackageOrErr{
- "simple": PackageOrErr{
+ "simple": {
P: Package{
ImportPath: "simple",
CommentPath: "",
@@ -234,13 +338,13 @@
out: PackageTree{
ImportRoot: "simple",
Packages: map[string]PackageOrErr{
- "simple": PackageOrErr{
+ "simple": {
P: Package{
ImportPath: "simple",
CommentPath: "",
Name: "simple",
Imports: []string{
- "github.com/sdboyer/vsolver",
+ "github.com/sdboyer/gps",
"sort",
},
TestImports: []string{
@@ -258,13 +362,13 @@
out: PackageTree{
ImportRoot: "simple",
Packages: map[string]PackageOrErr{
- "simple": PackageOrErr{
+ "simple": {
P: Package{
ImportPath: "simple",
CommentPath: "",
Name: "simple",
Imports: []string{
- "github.com/sdboyer/vsolver",
+ "github.com/sdboyer/gps",
"sort",
},
TestImports: []string{
@@ -282,13 +386,13 @@
out: PackageTree{
ImportRoot: "simple",
Packages: map[string]PackageOrErr{
- "simple": PackageOrErr{
+ "simple": {
P: Package{
ImportPath: "simple",
CommentPath: "",
Name: "simple",
Imports: []string{
- "github.com/sdboyer/vsolver",
+ "github.com/sdboyer/gps",
"sort",
},
TestImports: []string{
@@ -307,13 +411,13 @@
out: PackageTree{
ImportRoot: "m1p",
Packages: map[string]PackageOrErr{
- "m1p": PackageOrErr{
+ "m1p": {
P: Package{
ImportPath: "m1p",
CommentPath: "",
Name: "m1p",
Imports: []string{
- "github.com/sdboyer/vsolver",
+ "github.com/sdboyer/gps",
"os",
"sort",
},
@@ -328,24 +432,24 @@
out: PackageTree{
ImportRoot: "nest",
Packages: map[string]PackageOrErr{
- "nest": PackageOrErr{
+ "nest": {
P: Package{
ImportPath: "nest",
CommentPath: "",
Name: "simple",
Imports: []string{
- "github.com/sdboyer/vsolver",
+ "github.com/sdboyer/gps",
"sort",
},
},
},
- "nest/m1p": PackageOrErr{
+ "nest/m1p": {
P: Package{
ImportPath: "nest/m1p",
CommentPath: "",
Name: "m1p",
Imports: []string{
- "github.com/sdboyer/vsolver",
+ "github.com/sdboyer/gps",
"os",
"sort",
},
@@ -360,30 +464,116 @@
out: PackageTree{
ImportRoot: "ren",
Packages: map[string]PackageOrErr{
- "ren": PackageOrErr{
+ "ren": {
Err: &build.NoGoError{
Dir: j("ren"),
},
},
- "ren/m1p": PackageOrErr{
+ "ren/m1p": {
P: Package{
ImportPath: "ren/m1p",
CommentPath: "",
Name: "m1p",
Imports: []string{
- "github.com/sdboyer/vsolver",
+ "github.com/sdboyer/gps",
"os",
"sort",
},
},
},
- "ren/simple": PackageOrErr{
+ "ren/simple": {
P: Package{
ImportPath: "ren/simple",
CommentPath: "",
Name: "simple",
Imports: []string{
- "github.com/sdboyer/vsolver",
+ "github.com/sdboyer/gps",
+ "sort",
+ },
+ },
+ },
+ },
+ },
+ },
+ "internal name mismatch": {
+ fileRoot: j("doublenest"),
+ importRoot: "doublenest",
+ out: PackageTree{
+ ImportRoot: "doublenest",
+ Packages: map[string]PackageOrErr{
+ "doublenest": {
+ P: Package{
+ ImportPath: "doublenest",
+ CommentPath: "",
+ Name: "base",
+ Imports: []string{
+ "github.com/sdboyer/gps",
+ "go/parser",
+ },
+ },
+ },
+ "doublenest/namemismatch": {
+ P: Package{
+ ImportPath: "doublenest/namemismatch",
+ CommentPath: "",
+ Name: "nm",
+ Imports: []string{
+ "github.com/Masterminds/semver",
+ "os",
+ },
+ },
+ },
+ "doublenest/namemismatch/m1p": {
+ P: Package{
+ ImportPath: "doublenest/namemismatch/m1p",
+ CommentPath: "",
+ Name: "m1p",
+ Imports: []string{
+ "github.com/sdboyer/gps",
+ "os",
+ "sort",
+ },
+ },
+ },
+ },
+ },
+ },
+ "file and importroot mismatch": {
+ fileRoot: j("doublenest"),
+ importRoot: "other",
+ out: PackageTree{
+ ImportRoot: "other",
+ Packages: map[string]PackageOrErr{
+ "other": {
+ P: Package{
+ ImportPath: "other",
+ CommentPath: "",
+ Name: "base",
+ Imports: []string{
+ "github.com/sdboyer/gps",
+ "go/parser",
+ },
+ },
+ },
+ "other/namemismatch": {
+ P: Package{
+ ImportPath: "other/namemismatch",
+ CommentPath: "",
+ Name: "nm",
+ Imports: []string{
+ "github.com/Masterminds/semver",
+ "os",
+ },
+ },
+ },
+ "other/namemismatch/m1p": {
+ P: Package{
+ ImportPath: "other/namemismatch/m1p",
+ CommentPath: "",
+ Name: "m1p",
+ Imports: []string{
+ "github.com/sdboyer/gps",
+ "os",
"sort",
},
},
@@ -397,13 +587,34 @@
out: PackageTree{
ImportRoot: "simple",
Packages: map[string]PackageOrErr{
- "simple": PackageOrErr{
+ "simple": {
P: Package{
ImportPath: "simple",
CommentPath: "",
Name: "simple",
Imports: []string{
- "github.com/sdboyer/vsolver",
+ "github.com/sdboyer/gps",
+ "sort",
+ "unicode",
+ },
+ },
+ },
+ },
+ },
+ },
+ "code and ignored main with comment leader": {
+ fileRoot: j("igmainlong"),
+ importRoot: "simple",
+ out: PackageTree{
+ ImportRoot: "simple",
+ Packages: map[string]PackageOrErr{
+ "simple": {
+ P: Package{
+ ImportPath: "simple",
+ CommentPath: "",
+ Name: "simple",
+ Imports: []string{
+ "github.com/sdboyer/gps",
"sort",
"unicode",
},
@@ -418,13 +629,13 @@
out: PackageTree{
ImportRoot: "simple",
Packages: map[string]PackageOrErr{
- "simple": PackageOrErr{
+ "simple": {
P: Package{
ImportPath: "simple",
CommentPath: "",
Name: "simple",
Imports: []string{
- "github.com/sdboyer/vsolver",
+ "github.com/sdboyer/gps",
"sort",
"unicode",
},
@@ -443,7 +654,7 @@
out: PackageTree{
ImportRoot: "twopkgs",
Packages: map[string]PackageOrErr{
- "twopkgs": PackageOrErr{
+ "twopkgs": {
Err: &build.MultiplePackageError{
Dir: j("twopkgs"),
Packages: []string{"simple", "m1p"},
@@ -453,6 +664,86 @@
},
},
},
+ // imports a missing pkg
+ "missing import": {
+ fileRoot: j("missing"),
+ importRoot: "missing",
+ out: PackageTree{
+ ImportRoot: "missing",
+ Packages: map[string]PackageOrErr{
+ "missing": {
+ P: Package{
+ ImportPath: "missing",
+ CommentPath: "",
+ Name: "simple",
+ Imports: []string{
+ "github.com/sdboyer/gps",
+ "missing/missing",
+ "sort",
+ },
+ },
+ },
+ "missing/m1p": {
+ P: Package{
+ ImportPath: "missing/m1p",
+ CommentPath: "",
+ Name: "m1p",
+ Imports: []string{
+ "github.com/sdboyer/gps",
+ "os",
+ "sort",
+ },
+ },
+ },
+ },
+ },
+ },
+ // has disallowed dir names
+ "disallowed dirs": {
+ fileRoot: j("disallow"),
+ importRoot: "disallow",
+ out: PackageTree{
+ ImportRoot: "disallow",
+ Packages: map[string]PackageOrErr{
+ "disallow": {
+ P: Package{
+ ImportPath: "disallow",
+ CommentPath: "",
+ Name: "disallow",
+ Imports: []string{
+ "disallow/testdata",
+ "github.com/sdboyer/gps",
+ "sort",
+ },
+ },
+ },
+ // disallow/.m1p is ignored by listPackages...for now. Kept
+ // here commented because this might change again...
+ //"disallow/.m1p": {
+ //P: Package{
+ //ImportPath: "disallow/.m1p",
+ //CommentPath: "",
+ //Name: "m1p",
+ //Imports: []string{
+ //"github.com/sdboyer/gps",
+ //"os",
+ //"sort",
+ //},
+ //},
+ //},
+ "disallow/testdata": {
+ P: Package{
+ ImportPath: "disallow/testdata",
+ CommentPath: "",
+ Name: "testdata",
+ Imports: []string{
+ "hash",
+ },
+ },
+ },
+ },
+ },
+ },
// This case mostly exists for the PackageTree methods, but it does
// cover a bit of range
"varied": {
@@ -461,7 +752,7 @@
out: PackageTree{
ImportRoot: "varied",
Packages: map[string]PackageOrErr{
- "varied": PackageOrErr{
+ "varied": {
P: Package{
ImportPath: "varied",
CommentPath: "",
@@ -474,7 +765,7 @@
},
},
},
- "varied/otherpath": PackageOrErr{
+ "varied/otherpath": {
P: Package{
ImportPath: "varied/otherpath",
CommentPath: "",
@@ -485,19 +776,19 @@
},
},
},
- "varied/simple": PackageOrErr{
+ "varied/simple": {
P: Package{
ImportPath: "varied/simple",
CommentPath: "",
Name: "simple",
Imports: []string{
- "github.com/sdboyer/vsolver",
+ "github.com/sdboyer/gps",
"go/parser",
"varied/simple/another",
},
},
},
- "varied/simple/another": PackageOrErr{
+ "varied/simple/another": {
P: Package{
ImportPath: "varied/simple/another",
CommentPath: "",
@@ -511,7 +802,7 @@
},
},
},
- "varied/namemismatch": PackageOrErr{
+ "varied/namemismatch": {
P: Package{
ImportPath: "varied/namemismatch",
CommentPath: "",
@@ -522,13 +813,13 @@
},
},
},
- "varied/m1p": PackageOrErr{
+ "varied/m1p": {
P: Package{
ImportPath: "varied/m1p",
CommentPath: "",
Name: "m1p",
Imports: []string{
- "github.com/sdboyer/vsolver",
+ "github.com/sdboyer/gps",
"os",
"sort",
},
@@ -574,7 +865,7 @@
for path, perr := range fix.out.Packages {
seen[path] = true
if operr, exists := out.Packages[path]; !exists {
- t.Errorf("listPackages(%q): Expected PackageOrErr for path %s was missing from output:\n\t%s", path, perr)
+ t.Errorf("listPackages(%q): Expected PackageOrErr for path %s was missing from output:\n\t%s", name, path, perr)
} else {
if !reflect.DeepEqual(perr, operr) {
t.Errorf("listPackages(%q): PkgOrErr for path %s was not as expected:\n\t(GOT): %s\n\t(WNT): %s", name, path, operr, perr)
@@ -587,7 +878,7 @@
continue
}
- t.Errorf("listPackages(%q): Got PackageOrErr for path %s, but none was expected:\n\t%s", path, operr)
+ t.Errorf("listPackages(%q): Got PackageOrErr for path %s, but none was expected:\n\t%s", name, path, operr)
}
}
}
@@ -609,10 +900,7 @@
var main, tests bool
validate := func() {
- result, err := vptree.ListExternalImports(main, tests, ignore)
- if err != nil {
- t.Errorf("%q case returned err: %s", name, err)
- }
+ result := vptree.ListExternalImports(main, tests, ignore)
if !reflect.DeepEqual(expect, result) {
t.Errorf("Wrong imports in %q case:\n\t(GOT): %s\n\t(WNT): %s", name, result, expect)
}
@@ -621,7 +909,7 @@
all := []string{
"encoding/binary",
"github.com/Masterminds/semver",
- "github.com/sdboyer/vsolver",
+ "github.com/sdboyer/gps",
"go/parser",
"hash",
"net/http",
@@ -697,8 +985,7 @@
ignore = map[string]bool{
"varied/simple": true,
}
- // we get github.com/sdboyer/vsolver from m1p, too, so it should still be
- // there
+ // we get github.com/sdboyer/gps from m1p, too, so it should still be there
except("go/parser")
validate()
@@ -727,24 +1014,36 @@
main, tests = true, true
- // ignore two that should knock out vsolver
+ // ignore two that should knock out gps
name = "ignore both importers"
ignore = map[string]bool{
"varied/simple": true,
"varied/m1p": true,
}
- except("sort", "github.com/sdboyer/vsolver", "go/parser")
+ except("sort", "github.com/sdboyer/gps", "go/parser")
validate()
// finally, directly ignore some external packages
name = "ignore external"
ignore = map[string]bool{
- "github.com/sdboyer/vsolver": true,
- "go/parser": true,
- "sort": true,
+ "github.com/sdboyer/gps": true,
+ "go/parser": true,
+ "sort": true,
}
- except("sort", "github.com/sdboyer/vsolver", "go/parser")
+ except("sort", "github.com/sdboyer/gps", "go/parser")
validate()
+
+ // The only thing varied *doesn't* cover is disallowed path patterns
+ ptree, err := listPackages(filepath.Join(getwd(t), "_testdata", "src", "disallow"), "disallow")
+ if err != nil {
+ t.Fatalf("listPackages failed on disallow test case: %s", err)
+ }
+
+ result := ptree.ListExternalImports(false, false, nil)
+ expect = []string{"github.com/sdboyer/gps", "hash", "sort"}
+ if !reflect.DeepEqual(expect, result) {
+ t.Errorf("Wrong imports in %q case:\n\t(GOT): %s\n\t(WNT): %s", name, result, expect)
+ }
}
func TestExternalReach(t *testing.T) {
@@ -761,10 +1060,7 @@
var ignore map[string]bool
validate := func() {
- result, err := vptree.ExternalReach(main, tests, ignore)
- if err != nil {
- t.Errorf("ver(%q): case returned err: %s", name, err)
- }
+ result := vptree.ExternalReach(main, tests, ignore)
if !reflect.DeepEqual(expect, result) {
seen := make(map[string]bool)
for ip, epkgs := range expect {
@@ -788,12 +1084,12 @@
}
all := map[string][]string{
- "varied": {"encoding/binary", "github.com/Masterminds/semver", "github.com/sdboyer/vsolver", "go/parser", "hash", "net/http", "os", "sort"},
- "varied/m1p": {"github.com/sdboyer/vsolver", "os", "sort"},
+ "varied": {"encoding/binary", "github.com/Masterminds/semver", "github.com/sdboyer/gps", "go/parser", "hash", "net/http", "os", "sort"},
+ "varied/m1p": {"github.com/sdboyer/gps", "os", "sort"},
"varied/namemismatch": {"github.com/Masterminds/semver", "os"},
- "varied/otherpath": {"github.com/sdboyer/vsolver", "os", "sort"},
- "varied/simple": {"encoding/binary", "github.com/sdboyer/vsolver", "go/parser", "hash", "os", "sort"},
- "varied/simple/another": {"encoding/binary", "github.com/sdboyer/vsolver", "hash", "os", "sort"},
+ "varied/otherpath": {"github.com/sdboyer/gps", "os", "sort"},
+ "varied/simple": {"encoding/binary", "github.com/sdboyer/gps", "go/parser", "hash", "os", "sort"},
+ "varied/simple/another": {"encoding/binary", "github.com/sdboyer/gps", "hash", "os", "sort"},
}
// build a map to validate the exception inputs. do this because shit is
// hard enough to keep track of that it's preferable not to have silent
@@ -891,7 +1187,7 @@
"varied encoding/binary",
"varied/simple encoding/binary",
"varied/simple/another encoding/binary",
- "varied/otherpath github.com/sdboyer/vsolver os sort",
+ "varied/otherpath github.com/sdboyer/gps os sort",
)
// almost the same as previous, but varied just goes away completely
@@ -901,7 +1197,7 @@
"varied",
"varied/simple encoding/binary",
"varied/simple/another encoding/binary",
- "varied/otherpath github.com/sdboyer/vsolver os sort",
+ "varied/otherpath github.com/sdboyer/gps os sort",
)
validate()
@@ -929,7 +1225,7 @@
}
except(
// root pkg loses on everything in varied/simple/another and varied/m1p
- "varied hash encoding/binary go/parser github.com/sdboyer/vsolver sort",
+ "varied hash encoding/binary go/parser github.com/sdboyer/gps sort",
"varied/otherpath",
"varied/simple",
)
@@ -940,22 +1236,21 @@
ignore["varied/namemismatch"] = true
except(
// root pkg loses on everything in varied/simple/another and varied/m1p
- "varied hash encoding/binary go/parser github.com/sdboyer/vsolver sort os github.com/Masterminds/semver",
+ "varied hash encoding/binary go/parser github.com/sdboyer/gps sort os github.com/Masterminds/semver",
"varied/otherpath",
"varied/simple",
"varied/namemismatch",
)
validate()
-
}
var _ = map[string][]string{
- "varied": {"encoding/binary", "github.com/Masterminds/semver", "github.com/sdboyer/vsolver", "go/parser", "hash", "net/http", "os", "sort"},
- "varied/m1p": {"github.com/sdboyer/vsolver", "os", "sort"},
+ "varied": {"encoding/binary", "github.com/Masterminds/semver", "github.com/sdboyer/gps", "go/parser", "hash", "net/http", "os", "sort"},
+ "varied/m1p": {"github.com/sdboyer/gps", "os", "sort"},
"varied/namemismatch": {"github.com/Masterminds/semver", "os"},
- "varied/otherpath": {"github.com/sdboyer/vsolver", "os", "sort"},
- "varied/simple": {"encoding/binary", "github.com/sdboyer/vsolver", "go/parser", "hash", "os", "sort"},
- "varied/simple/another": {"encoding/binary", "github.com/sdboyer/vsolver", "hash", "os", "sort"},
+ "varied/otherpath": {"github.com/sdboyer/gps", "os", "sort"},
+ "varied/simple": {"encoding/binary", "github.com/sdboyer/gps", "go/parser", "hash", "os", "sort"},
+ "varied/simple/another": {"encoding/binary", "github.com/sdboyer/gps", "hash", "os", "sort"},
}
func getwd(t *testing.T) string {
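The test table above drives `wmToReach`, whose new signature drops the error return: missing or err'd internal packages no longer fail the whole call, they simply "poison" every package that transitively imports them, which is then omitted from the reach map. The sketch below illustrates that poison-propagation idea under the same fixture shapes; the type and function names (`wm`, `reach`) mirror the tests but the body is an illustrative reconstruction, not the real gps implementation.

```go
package main

import (
	"fmt"
	"sort"
)

// wm mirrors the workmap entry used by the fixtures: per-package external
// (ex) and internal (in) imports, or a load error.
type wm struct {
	err error
	ex  map[string]bool
	in  map[string]bool
}

// reach flattens a workmap into per-package external import lists. A missing
// or err'd internal package is poison: any package that (transitively)
// imports it is dropped from the output entirely, as in the
// "missing package is poison" and "err'd package is poison" cases above.
func reach(workmap map[string]wm) map[string][]string {
	type state int
	const (
		unvisited state = iota
		visiting
		ok
		poisoned
	)
	states := make(map[string]state)
	exts := make(map[string]map[string]bool)

	var visit func(pkg string) bool
	visit = func(pkg string) bool {
		w, exists := workmap[pkg]
		if !exists || w.err != nil {
			return false // missing or err'd: poison
		}
		switch states[pkg] {
		case ok:
			return true
		case poisoned:
			return false
		case visiting:
			return true // tolerate cycles; exts union happens on completion
		}
		states[pkg] = visiting
		set := make(map[string]bool)
		for e := range w.ex {
			set[e] = true
		}
		for i := range w.in {
			if !visit(i) {
				states[pkg] = poisoned
				return false
			}
			for e := range exts[i] { // union in the dependency's externals
				set[e] = true
			}
		}
		states[pkg] = ok
		exts[pkg] = set
		return true
	}

	out := make(map[string][]string)
	for pkg := range workmap {
		if visit(pkg) {
			var l []string
			for e := range exts[pkg] {
				l = append(l, e)
			}
			sort.Strings(l)
			out[pkg] = l
		}
	}
	return out
}

func main() {
	// Mirrors "missing package is poison": A imports A/foo, which does not
	// exist, so A is dropped; A/bar survives with its own externals.
	wmm := map[string]wm{
		"A":     {ex: map[string]bool{"B/foo": true}, in: map[string]bool{"A/foo": true, "A/bar": true}},
		"A/bar": {ex: map[string]bool{"B/baz": true}, in: map[string]bool{}},
	}
	fmt.Println(reach(wmm)) // map[A/bar:[B/baz]]
}
```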
diff --git a/vendor/github.com/sdboyer/vsolver/appveyor.yml b/vendor/github.com/sdboyer/gps/appveyor.yml
similarity index 86%
rename from vendor/github.com/sdboyer/vsolver/appveyor.yml
rename to vendor/github.com/sdboyer/gps/appveyor.yml
index cbaa941..9bf23a3 100644
--- a/vendor/github.com/sdboyer/vsolver/appveyor.yml
+++ b/vendor/github.com/sdboyer/gps/appveyor.yml
@@ -1,6 +1,6 @@
version: build-{build}.{branch}
-clone_folder: C:\gopath\src\github.com\sdboyer\vsolver
+clone_folder: C:\gopath\src\github.com\sdboyer\gps
shallow_clone: true
environment:
diff --git a/vendor/github.com/sdboyer/vsolver/bridge.go b/vendor/github.com/sdboyer/gps/bridge.go
similarity index 88%
rename from vendor/github.com/sdboyer/vsolver/bridge.go
rename to vendor/github.com/sdboyer/gps/bridge.go
index 7f57f15..8b26e6b 100644
--- a/vendor/github.com/sdboyer/vsolver/bridge.go
+++ b/vendor/github.com/sdboyer/gps/bridge.go
@@ -1,8 +1,9 @@
-package vsolver
+package gps
import (
"fmt"
"os"
+ "path/filepath"
"sort"
)
@@ -11,6 +12,9 @@
type sourceBridge interface {
getProjectInfo(pa atom) (Manifest, Lock, error)
listVersions(id ProjectIdentifier) ([]Version, error)
+ listPackages(id ProjectIdentifier, v Version) (PackageTree, error)
+ computeRootReach() ([]string, error)
+ revisionPresentIn(id ProjectIdentifier, r Revision) (bool, error)
pairRevision(id ProjectIdentifier, r Revision) []Version
pairVersion(id ProjectIdentifier, v UnpairedVersion) PairedVersion
repoExists(id ProjectIdentifier) (bool, error)
@@ -18,9 +22,7 @@
matches(id ProjectIdentifier, c Constraint, v Version) bool
matchesAny(id ProjectIdentifier, c1, c2 Constraint) bool
intersect(id ProjectIdentifier, c1, c2 Constraint) Constraint
- listPackages(id ProjectIdentifier, v Version) (PackageTree, error)
- computeRootReach() ([]string, error)
- verifyRoot(path string) error
+ verifyRootDir(path string) error
deduceRemoteRepo(path string) (*remoteRepo, error)
}
@@ -28,7 +30,7 @@
// caching that's tailored to the requirements of a particular solve run.
//
// It also performs transformations between ProjectIdentifiers, which is what
-// the solver primarily deals in, and ProjectName, which is what the
+// the solver primarily deals in, and ProjectRoot, which is what the
// SourceManager primarily deals in. This separation is helpful because it keeps
// the complexities of deciding what a particular name "means" entirely within
// the solver, while the SourceManager can traffic exclusively in
@@ -41,17 +43,12 @@
// The underlying, adapted-to SourceManager
sm SourceManager
- // Direction to sort the version list. False indicates sorting for upgrades;
- // true for downgrades.
- sortdown bool
-
- // The name of the root project we're operating on. Used to redirect some
- // calls that would ordinarily go to the SourceManager to a root-specific
- // logical path, instead.
- name ProjectName
-
- // The path to the base directory of the root project.
- root string
+ // The solver which we're assisting.
+ //
+ // The link between solver and bridge is circular, which is typically a bit
+ // awkward, but the bridge needs access to so many of the input arguments
+ // held by the solver that it ends up being easier and saner to do this.
+ s *solver
// Simple, local cache of the root's PackageTree
crp *struct {
@@ -59,24 +56,34 @@
err error
}
- // A map of packages to ignore.
- ignore map[string]bool
-
// Map of project root name to their available version list. This cache is
// layered on top of the proper SourceManager's cache; the only difference
// is that this keeps the versions sorted in the direction required by the
// current solve run
- vlists map[ProjectName][]Version
+ vlists map[ProjectRoot][]Version
+}
+
+// Global factory func to create a bridge. This exists solely to allow tests to
+// override it with a custom bridge and sm.
+var mkBridge func(*solver, SourceManager) sourceBridge = func(s *solver, sm SourceManager) sourceBridge {
+ return &bridge{
+ sm: sm,
+ s: s,
+ vlists: make(map[ProjectRoot][]Version),
+ }
}
func (b *bridge) getProjectInfo(pa atom) (Manifest, Lock, error) {
- return b.sm.GetProjectInfo(ProjectName(pa.id.netName()), pa.v)
+ if pa.id.ProjectRoot == b.s.params.ImportRoot {
+ return b.s.rm, b.s.rl, nil
+ }
+ return b.sm.GetProjectInfo(ProjectRoot(pa.id.netName()), pa.v)
}
-func (b *bridge) key(id ProjectIdentifier) ProjectName {
- k := ProjectName(id.NetworkName)
+func (b *bridge) key(id ProjectIdentifier) ProjectRoot {
+ k := ProjectRoot(id.NetworkName)
if k == "" {
- k = id.LocalName
+ k = id.ProjectRoot
}
return k
@@ -90,12 +97,12 @@
}
vl, err := b.sm.ListVersions(k)
- // TODO cache errors, too?
+ // TODO(sdboyer) cache errors, too?
if err != nil {
return nil, err
}
- if b.sortdown {
+ if b.s.params.Downgrade {
sort.Sort(downgradeVersionSorter(vl))
} else {
sort.Sort(upgradeVersionSorter(vl))
@@ -105,14 +112,25 @@
return vl, nil
}
+func (b *bridge) revisionPresentIn(id ProjectIdentifier, r Revision) (bool, error) {
+ k := b.key(id)
+ return b.sm.RevisionPresentIn(k, r)
+}
+
func (b *bridge) repoExists(id ProjectIdentifier) (bool, error) {
k := b.key(id)
return b.sm.RepoExists(k)
}
func (b *bridge) vendorCodeExists(id ProjectIdentifier) (bool, error) {
- k := b.key(id)
- return b.sm.VendorCodeExists(k)
+ fi, err := os.Stat(filepath.Join(b.s.params.RootDir, "vendor", string(id.ProjectRoot)))
+ if err != nil {
+ return false, err
+ } else if fi.IsDir() {
+ return true, nil
+ }
+
+ return false, nil
}
func (b *bridge) pairVersion(id ProjectIdentifier, v UnpairedVersion) PairedVersion {
@@ -349,7 +367,7 @@
// potentially messy root project source location on disk. Together, this means
// that we can't ask the real SourceManager to do it.
func (b *bridge) computeRootReach() ([]string, error) {
- // TODO i now cannot remember the reasons why i thought being less stringent
+ // TODO(sdboyer) i now cannot remember the reasons why i thought being less stringent
// in the analysis was OK. so, for now, we just compute a bog-standard list
// of externally-touched packages, including mains and test.
ptree, err := b.listRootPackages()
@@ -357,12 +375,12 @@
return nil, err
}
- return ptree.ListExternalImports(true, true, b.ignore)
+ return ptree.ListExternalImports(true, true, b.s.ig), nil
}
func (b *bridge) listRootPackages() (PackageTree, error) {
if b.crp == nil {
- ptree, err := listPackages(b.root, string(b.name))
+ ptree, err := listPackages(b.s.params.RootDir, string(b.s.params.ImportRoot))
b.crp = &struct {
ptree PackageTree
@@ -385,7 +403,7 @@
// The root project is handled separately, as the source manager isn't
// responsible for that code.
func (b *bridge) listPackages(id ProjectIdentifier, v Version) (PackageTree, error) {
- if id.LocalName == b.name {
+ if id.ProjectRoot == b.s.params.ImportRoot {
return b.listRootPackages()
}
@@ -397,11 +415,11 @@
// verifyRoot ensures that the provided path to the project root is in good
// working condition. This check is made only once, at the beginning of a solve
// run.
-func (b *bridge) verifyRoot(path string) error {
+func (b *bridge) verifyRootDir(path string) error {
if fi, err := os.Stat(path); err != nil {
- return badOptsFailure(fmt.Sprintf("Could not read project root (%s): %s", path, err))
+ return badOptsFailure(fmt.Sprintf("could not read project root (%s): %s", path, err))
} else if !fi.IsDir() {
- return badOptsFailure(fmt.Sprintf("Project root (%s) is a file, not a directory.", path))
+ return badOptsFailure(fmt.Sprintf("project root (%s) is a file, not a directory", path))
}
return nil
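The bridge.go hunk above replaces the bridge's copied solve parameters (`sortdown`, `name`, `root`, `ignore`) with a single back-pointer to the solver, and introduces a package-level `mkBridge` factory variable so tests can swap in a custom bridge. A minimal sketch of that factory-variable pattern follows; aside from the `mkBridge` name, the types and method here are simplified stand-ins, not the real gps API.

```go
package main

import "fmt"

// Simplified stand-in for the solver holding the input parameters.
type solver struct{ downgrade bool }

// Simplified stand-in for the sourceBridge interface.
type sourceBridge interface {
	sortedVersions() []string
}

// bridge keeps only a back-pointer to the solver and reads parameters off it
// directly, rather than caching its own copies — the change the diff makes.
type bridge struct{ s *solver }

func (b *bridge) sortedVersions() []string {
	vs := []string{"1.0.0", "2.0.0"}
	if b.s.downgrade {
		// Downgrade runs sort the version list in the opposite direction.
		return []string{vs[1], vs[0]}
	}
	return vs
}

// Package-level factory variable: production code always calls mkBridge, and
// tests reassign it to inject a stub sourceBridge implementation.
var mkBridge = func(s *solver) sourceBridge {
	return &bridge{s: s}
}

func main() {
	b := mkBridge(&solver{downgrade: true})
	fmt.Println(b.sortedVersions()) // [2.0.0 1.0.0]
}
```

The circular solver↔bridge link is called out as "typically a bit awkward" in the diff's own comment, but it avoids keeping duplicate state in sync across the two types.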
diff --git a/vendor/github.com/sdboyer/vsolver/circle.yml b/vendor/github.com/sdboyer/gps/circle.yml
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/circle.yml
rename to vendor/github.com/sdboyer/gps/circle.yml
diff --git a/vendor/github.com/sdboyer/vsolver/constraint_test.go b/vendor/github.com/sdboyer/gps/constraint_test.go
similarity index 99%
rename from vendor/github.com/sdboyer/vsolver/constraint_test.go
rename to vendor/github.com/sdboyer/gps/constraint_test.go
index 8dc7bb6..3863e65 100644
--- a/vendor/github.com/sdboyer/vsolver/constraint_test.go
+++ b/vendor/github.com/sdboyer/gps/constraint_test.go
@@ -1,4 +1,4 @@
-package vsolver
+package gps
import (
"fmt"
@@ -89,7 +89,7 @@
}
// Now add same rev to different branches
- // TODO this might not actually be a good idea, when you consider the
+ // TODO(sdboyer) this might not actually be a good idea, when you consider the
// semantics of floating versions...matching on an underlying rev might be
// nice in the short term, but it's probably shit most of the time
v5 := v2.Is(Revision("snuffleupagus")).(versionPair)
@@ -586,7 +586,7 @@
v5 := v2.Is(fozzie).(versionPair)
v6 := v3.Is(fozzie).(versionPair)
- // TODO we can't use the same range as below b/c semver.rangeConstraint is
+ // TODO(sdboyer) we can't use the same range as below b/c semver.rangeConstraint is
// still an incomparable type
c1, err := NewSemverConstraint("=1.0.0")
if err != nil {
diff --git a/vendor/github.com/sdboyer/vsolver/constraints.go b/vendor/github.com/sdboyer/gps/constraints.go
similarity index 97%
rename from vendor/github.com/sdboyer/vsolver/constraints.go
rename to vendor/github.com/sdboyer/gps/constraints.go
index 3cfe5ee..43b8b09 100644
--- a/vendor/github.com/sdboyer/vsolver/constraints.go
+++ b/vendor/github.com/sdboyer/gps/constraints.go
@@ -1,4 +1,4 @@
-package vsolver
+package gps
import (
"fmt"
@@ -14,7 +14,7 @@
// A Constraint provides structured limitations on the versions that are
// admissible for a given project.
//
-// As with Version, it has a private method because the vsolver's internal
+// As with Version, it has a private method because gps's internal
// implementation of the problem is complete, and the system relies on type
// magic to operate.
type Constraint interface {
diff --git a/vendor/github.com/sdboyer/vsolver/discovery.go b/vendor/github.com/sdboyer/gps/discovery.go
similarity index 99%
rename from vendor/github.com/sdboyer/vsolver/discovery.go
rename to vendor/github.com/sdboyer/gps/discovery.go
index 5543bee..8da4a66 100644
--- a/vendor/github.com/sdboyer/vsolver/discovery.go
+++ b/vendor/github.com/sdboyer/gps/discovery.go
@@ -2,7 +2,7 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-package vsolver
+package gps
// This code is taken from cmd/go/discovery.go; it is the logic go get itself
// uses to interpret meta imports information.
diff --git a/vendor/github.com/sdboyer/vsolver/errors.go b/vendor/github.com/sdboyer/gps/errors.go
similarity index 85%
rename from vendor/github.com/sdboyer/vsolver/errors.go
rename to vendor/github.com/sdboyer/gps/errors.go
index 18f50fb..26c8413 100644
--- a/vendor/github.com/sdboyer/vsolver/errors.go
+++ b/vendor/github.com/sdboyer/gps/errors.go
@@ -1,4 +1,4 @@
-package vsolver
+package gps
import (
"bytes"
@@ -8,7 +8,7 @@
type errorLevel uint8
-// TODO consistent, sensible way of handling 'type' and 'severity' - or figure
+// TODO(sdboyer) consistent, sensible way of handling 'type' and 'severity' - or figure
// out that they're not orthogonal and collapse into just 'type'
const (
@@ -41,11 +41,11 @@
func (e *noVersionError) Error() string {
if len(e.fails) == 0 {
- return fmt.Sprintf("No versions found for project %q.", e.pn.LocalName)
+ return fmt.Sprintf("No versions found for project %q.", e.pn.ProjectRoot)
}
var buf bytes.Buffer
- fmt.Fprintf(&buf, "No versions of %s met constraints:", e.pn.LocalName)
+ fmt.Fprintf(&buf, "No versions of %s met constraints:", e.pn.ProjectRoot)
for _, f := range e.fails {
fmt.Fprintf(&buf, "\n\t%s: %s", f.v, f.f.Error())
}
@@ -59,7 +59,7 @@
}
var buf bytes.Buffer
- fmt.Fprintf(&buf, "No versions of %s met constraints:", e.pn.LocalName)
+ fmt.Fprintf(&buf, "No versions of %s met constraints:", e.pn.ProjectRoot)
for _, f := range e.fails {
if te, ok := f.f.(traceError); ok {
fmt.Fprintf(&buf, "\n %s: %s", f.v, te.traceString())
@@ -110,10 +110,10 @@
var buf bytes.Buffer
fmt.Fprintf(&buf, "constraint %s on %s disjoint with other dependers:\n", e.goal.dep.Constraint.String(), e.goal.dep.Ident.errString())
for _, f := range e.failsib {
- fmt.Fprintf(&buf, "%s from %s at %s (no overlap)\n", f.dep.Constraint.String(), f.depender.id.LocalName, f.depender.v)
+ fmt.Fprintf(&buf, "%s from %s at %s (no overlap)\n", f.dep.Constraint.String(), f.depender.id.ProjectRoot, f.depender.v)
}
for _, f := range e.nofailsib {
- fmt.Fprintf(&buf, "%s from %s at %s (some overlap)\n", f.dep.Constraint.String(), f.depender.id.LocalName, f.depender.v)
+ fmt.Fprintf(&buf, "%s from %s at %s (some overlap)\n", f.dep.Constraint.String(), f.depender.id.ProjectRoot, f.depender.v)
}
return buf.String()
@@ -134,7 +134,7 @@
func (e *constraintNotAllowedFailure) traceString() string {
str := "%s at %s depends on %s with %s, but that's already selected at %s"
- return fmt.Sprintf(str, e.goal.depender.id.LocalName, e.goal.depender.v, e.goal.dep.Ident.LocalName, e.goal.dep.Constraint, e.v)
+ return fmt.Sprintf(str, e.goal.depender.id.ProjectRoot, e.goal.depender.v, e.goal.dep.Ident.ProjectRoot, e.goal.dep.Constraint, e.v)
}
type versionNotAllowedFailure struct {
@@ -164,9 +164,9 @@
func (e *versionNotAllowedFailure) traceString() string {
var buf bytes.Buffer
- fmt.Fprintf(&buf, "%s at %s not allowed by constraint %s:\n", e.goal.id.LocalName, e.goal.v, e.c.String())
+ fmt.Fprintf(&buf, "%s at %s not allowed by constraint %s:\n", e.goal.id.ProjectRoot, e.goal.v, e.c.String())
for _, f := range e.failparent {
- fmt.Fprintf(&buf, " %s from %s at %s\n", f.dep.Constraint.String(), f.depender.id.LocalName, f.depender.v)
+ fmt.Fprintf(&buf, " %s from %s at %s\n", f.dep.Constraint.String(), f.depender.id.ProjectRoot, f.depender.v)
}
return buf.String()
@@ -188,7 +188,7 @@
}
type sourceMismatchFailure struct {
- shared ProjectName
+ shared ProjectRoot
sel []dependency
current, mismatch string
prob atom
@@ -197,7 +197,7 @@
func (e *sourceMismatchFailure) Error() string {
var cur []string
for _, c := range e.sel {
- cur = append(cur, string(c.depender.id.LocalName))
+ cur = append(cur, string(c.depender.id.ProjectRoot))
}
str := "Could not introduce %s at %s, as it depends on %s from %s, but %s is already marked as coming from %s by %s"
@@ -278,7 +278,7 @@
func (e *checkeeHasProblemPackagesFailure) traceString() string {
var buf bytes.Buffer
- fmt.Fprintf(&buf, "%s at %s has problem subpkg(s):\n", e.goal.id.LocalName, e.goal.v)
+ fmt.Fprintf(&buf, "%s at %s has problem subpkg(s):\n", e.goal.id.ProjectRoot, e.goal.v)
for pkg, errdep := range e.failpkg {
if errdep.err == nil {
fmt.Fprintf(&buf, "\t%s is missing; ", pkg)
@@ -375,3 +375,31 @@
return buf.String()
}
+
+// nonexistentRevisionFailure indicates that a revision constraint was specified
+// for a given project, but that that revision does not exist in the source
+// repository.
+type nonexistentRevisionFailure struct {
+ goal dependency
+ r Revision
+}
+
+func (e *nonexistentRevisionFailure) Error() string {
+ return fmt.Sprintf(
+ "Could not introduce %s at %s, as it requires %s at revision %s, but that revision does not exist",
+ e.goal.depender.id.errString(),
+ e.goal.depender.v,
+ e.goal.dep.Ident.errString(),
+ e.r,
+ )
+}
+
+func (e *nonexistentRevisionFailure) traceString() string {
+ return fmt.Sprintf(
+ "%s at %s wants missing rev %s of %s",
+ e.goal.depender.id.errString(),
+ e.goal.depender.v,
+ e.r,
+ e.goal.dep.Ident.errString(),
+ )
+}
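The failure types in this file all follow the same dual-representation pattern: a verbose `Error()` for end users and a compact `traceString()` for solver trace logs. A self-contained sketch of that pattern, with deliberately simplified, hypothetical field names:

```go
package main

import "fmt"

// missingRevFailure mirrors the Error()/traceString() split used by the
// solver's failure types: one verbose message for users, one compact line
// for trace output. The string fields are a simplification of gps's types.
type missingRevFailure struct {
	depender string // project that declared the constraint
	dep      string // project the constraint applies to
	rev      string // the revision that could not be found
}

func (e *missingRevFailure) Error() string {
	return fmt.Sprintf(
		"Could not introduce %s, as it requires %s at revision %s, but that revision does not exist",
		e.depender, e.dep, e.rev,
	)
}

func (e *missingRevFailure) traceString() string {
	return fmt.Sprintf("%s wants missing rev %s of %s", e.depender, e.rev, e.dep)
}

func main() {
	f := &missingRevFailure{depender: "example.com/a", dep: "example.com/b", rev: "deadbeef"}
	fmt.Println(f.Error())
	fmt.Println(f.traceString())
}
```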
diff --git a/vendor/github.com/sdboyer/gps/example.go b/vendor/github.com/sdboyer/gps/example.go
new file mode 100644
index 0000000..1a5a31a
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/example.go
@@ -0,0 +1,58 @@
+// +build ignore
+
+package main
+
+import (
+ "go/build"
+ "log"
+ "os"
+ "path/filepath"
+ "strings"
+
+ gps "github.com/sdboyer/gps"
+)
+
+// This is probably the simplest possible program that uses gps. It does the
+// substantive work that `go get` does, except:
+// 1. It drops the resulting tree into vendor instead of GOPATH
+// 2. It prefers semver tags (if available) over branches
+// 3. It removes any vendor directories nested within dependencies
+//
+// This will compile and work...and then blow away the vendor directory present
+// in the cwd, if any. Be careful!
+func main() {
+ // Operate on the current directory
+ root, _ := os.Getwd()
+ // Assume the current directory is correctly placed on a GOPATH, and derive
+ // the ProjectRoot from it
+ srcprefix := filepath.Join(build.Default.GOPATH, "src") + string(filepath.Separator)
+ importroot := filepath.ToSlash(strings.TrimPrefix(root, srcprefix))
+
+ // Set up params, including tracing
+ params := gps.SolveParameters{
+ RootDir: root,
+ ImportRoot: gps.ProjectRoot(importroot),
+ Trace: true,
+ TraceLogger: log.New(os.Stdout, "", 0),
+ }
+
+ // Set up a SourceManager with the NaiveAnalyzer
+ sourcemgr, _ := gps.NewSourceManager(NaiveAnalyzer{}, ".repocache", false)
+ defer sourcemgr.Release()
+
+ // Prep and run the solver
+ solver, _ := gps.Prepare(params, sourcemgr)
+ solution, err := solver.Solve()
+ if err == nil {
+ // If no failure, blow away the vendor dir and write a new one out,
+ // stripping nested vendor directories as we go.
+ os.RemoveAll(filepath.Join(root, "vendor"))
+ gps.CreateVendorTree(filepath.Join(root, "vendor"), solution, sourcemgr, true)
+ }
+}
+
+type NaiveAnalyzer struct{}
+
+func (a NaiveAnalyzer) GetInfo(path string, n gps.ProjectRoot) (gps.Manifest, gps.Lock, error) {
+ return nil, nil, nil
+}
diff --git a/vendor/github.com/sdboyer/vsolver/flags.go b/vendor/github.com/sdboyer/gps/flags.go
similarity index 98%
rename from vendor/github.com/sdboyer/vsolver/flags.go
rename to vendor/github.com/sdboyer/gps/flags.go
index 8a7880f..a7172c1 100644
--- a/vendor/github.com/sdboyer/vsolver/flags.go
+++ b/vendor/github.com/sdboyer/gps/flags.go
@@ -1,4 +1,4 @@
-package vsolver
+package gps
// projectExistence values represent the extent to which a project "exists."
type projectExistence uint8
diff --git a/vendor/github.com/sdboyer/vsolver/glide.lock b/vendor/github.com/sdboyer/gps/glide.lock
similarity index 100%
rename from vendor/github.com/sdboyer/vsolver/glide.lock
rename to vendor/github.com/sdboyer/gps/glide.lock
diff --git a/vendor/github.com/sdboyer/vsolver/glide.yaml b/vendor/github.com/sdboyer/gps/glide.yaml
similarity index 78%
rename from vendor/github.com/sdboyer/vsolver/glide.yaml
rename to vendor/github.com/sdboyer/gps/glide.yaml
index fed9822..690f9e1 100644
--- a/vendor/github.com/sdboyer/vsolver/glide.yaml
+++ b/vendor/github.com/sdboyer/gps/glide.yaml
@@ -1,4 +1,4 @@
-package: github.com/sdboyer/vsolver
+package: github.com/sdboyer/gps
owners:
- name: Sam Boyer
email: tech@samboyer.org
@@ -11,5 +11,4 @@
- package: github.com/termie/go-shutil
version: bcacb06fecaeec8dc42af03c87c6949f4a05c74c
vcs: git
-- package: github.com/hashicorp/go-immutable-radix
- package: github.com/armon/go-radix
diff --git a/vendor/github.com/sdboyer/vsolver/hash.go b/vendor/github.com/sdboyer/gps/hash.go
similarity index 76%
rename from vendor/github.com/sdboyer/vsolver/hash.go
rename to vendor/github.com/sdboyer/gps/hash.go
index 5fe87aa..9e27bcd 100644
--- a/vendor/github.com/sdboyer/vsolver/hash.go
+++ b/vendor/github.com/sdboyer/gps/hash.go
@@ -1,4 +1,4 @@
-package vsolver
+package gps
import (
"crypto/sha256"
@@ -19,18 +19,13 @@
func (s *solver) HashInputs() ([]byte, error) {
// Do these checks up front before any other work is needed, as they're the
// only things that can cause errors
- if err := s.b.verifyRoot(s.args.Root); err != nil {
- // This will already be a BadOptsFailure
- return nil, err
- }
-
// Pass in magic root values, and the bridge will analyze the right thing
- ptree, err := s.b.listPackages(ProjectIdentifier{LocalName: s.args.Name}, nil)
+ ptree, err := s.b.listPackages(ProjectIdentifier{ProjectRoot: s.params.ImportRoot}, nil)
if err != nil {
- return nil, badOptsFailure(fmt.Sprintf("Error while parsing imports under %s: %s", s.args.Root, err.Error()))
+ return nil, badOptsFailure(fmt.Sprintf("Error while parsing packages under %s: %s", s.params.RootDir, err.Error()))
}
- d, dd := s.args.Manifest.DependencyConstraints(), s.args.Manifest.TestDependencyConstraints()
+ d, dd := s.params.Manifest.DependencyConstraints(), s.params.Manifest.TestDependencyConstraints()
p := make(sortedDeps, len(d))
copy(p, d)
p = append(p, dd...)
@@ -40,7 +35,7 @@
// We have everything we need; now, compute the hash.
h := sha256.New()
for _, pd := range p {
- h.Write([]byte(pd.Ident.LocalName))
+ h.Write([]byte(pd.Ident.ProjectRoot))
h.Write([]byte(pd.Ident.NetworkName))
// FIXME Constraint.String() is a surjective-only transformation - tags
// and branches with the same name are written out as the same string.
@@ -49,10 +44,11 @@
h.Write([]byte(pd.Constraint.String()))
}
- // The stdlib packages play the same functional role in solving as ignores.
- // Because they change, albeit quite infrequently, we have to include them
- // in the hash.
+ // The stdlib and old appengine packages play the same functional role in
+ // solving as ignores. Because they change, albeit quite infrequently, we
+ // have to include them in the hash.
h.Write([]byte(stdlibPkgs))
+ h.Write([]byte(appenginePkgs))
// Write each of the packages, or the errors that were found for a
// particular subpath, into the hash.
@@ -88,12 +84,12 @@
}
}
- // TODO overrides
- // TODO aliases
+ // TODO(sdboyer) overrides
+ // TODO(sdboyer) aliases
return h.Sum(nil), nil
}
-type sortedDeps []ProjectDep
+type sortedDeps []ProjectConstraint
func (s sortedDeps) Len() int {
return len(s)
diff --git a/vendor/github.com/sdboyer/gps/hash_test.go b/vendor/github.com/sdboyer/gps/hash_test.go
new file mode 100644
index 0000000..dc27ddf
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/hash_test.go
@@ -0,0 +1,35 @@
+package gps
+
+import (
+ "bytes"
+ "crypto/sha256"
+ "testing"
+)
+
+func TestHashInputs(t *testing.T) {
+ fix := basicFixtures["shared dependency with overlapping constraints"]
+
+ params := SolveParameters{
+ RootDir: string(fix.ds[0].n),
+ ImportRoot: fix.ds[0].n,
+ Manifest: fix.ds[0],
+ Ignore: []string{"foo", "bar"},
+ }
+
+ s, err := Prepare(params, newdepspecSM(fix.ds, nil))
+ if err != nil {
+  t.Fatalf("Prepare returned unexpected err: %s", err)
+ }
+
+ dig, err := s.HashInputs()
+ if err != nil {
+ t.Fatalf("HashInputs returned unexpected err: %s", err)
+ }
+
+ h := sha256.New()
+ for _, v := range []string{"a", "a", "1.0.0", "b", "b", "1.0.0", stdlibPkgs, appenginePkgs, "root", "", "root", "a", "b", "bar", "foo"} {
+ h.Write([]byte(v))
+ }
+ correct := h.Sum(nil)
+
+ if !bytes.Equal(dig, correct) {
+ t.Errorf("Hashes are not equal")
+ }
+}
diff --git a/vendor/github.com/sdboyer/vsolver/import_mode_go15.go b/vendor/github.com/sdboyer/gps/import_mode_go15.go
similarity index 94%
rename from vendor/github.com/sdboyer/vsolver/import_mode_go15.go
rename to vendor/github.com/sdboyer/gps/import_mode_go15.go
index 05ae43a..5ef11c2 100644
--- a/vendor/github.com/sdboyer/vsolver/import_mode_go15.go
+++ b/vendor/github.com/sdboyer/gps/import_mode_go15.go
@@ -1,6 +1,6 @@
// +build !go1.6
-package vsolver
+package gps
import "go/build"
diff --git a/vendor/github.com/sdboyer/vsolver/import_mode_go16.go b/vendor/github.com/sdboyer/gps/import_mode_go16.go
similarity index 93%
rename from vendor/github.com/sdboyer/vsolver/import_mode_go16.go
rename to vendor/github.com/sdboyer/gps/import_mode_go16.go
index 1b798ce..edb534a 100644
--- a/vendor/github.com/sdboyer/vsolver/import_mode_go16.go
+++ b/vendor/github.com/sdboyer/gps/import_mode_go16.go
@@ -1,6 +1,6 @@
// +build go1.6
-package vsolver
+package gps
import "go/build"
diff --git a/vendor/github.com/sdboyer/vsolver/lock.go b/vendor/github.com/sdboyer/gps/lock.go
similarity index 86%
rename from vendor/github.com/sdboyer/vsolver/lock.go
rename to vendor/github.com/sdboyer/gps/lock.go
index 19a75d3..1d4db56 100644
--- a/vendor/github.com/sdboyer/vsolver/lock.go
+++ b/vendor/github.com/sdboyer/gps/lock.go
@@ -1,17 +1,17 @@
-package vsolver
+package gps
// Lock represents data from a lock file (or however the implementing tool
// chooses to store it) at a particular version that is relevant to the
// satisfiability solving process.
//
-// In general, the information produced by vsolver on finding a successful
+// In general, the information produced by gps on finding a successful
// solution is all that would be necessary to constitute a lock file, though
// tools can include whatever other information they want in their storage.
type Lock interface {
// Indicates the version of the solver used to generate this lock data
//SolverVersion() string
- // The hash of inputs to vsolver that resulted in this lock data
+ // The hash of inputs to gps that resulted in this lock data
InputHash() []byte
// Projects returns the list of LockedProjects contained in the lock data.
@@ -26,7 +26,6 @@
pi ProjectIdentifier
v UnpairedVersion
r Revision
- path string
pkgs []string
}
@@ -49,8 +48,7 @@
}
// NewLockedProject creates a new LockedProject struct with a given name,
-// version, upstream repository URI, and on-disk path at which the project is to
-// be checked out under a vendor directory.
+// version, and upstream repository URL.
//
// Note that passing a nil version will cause a panic. This is a correctness
// measure to ensure that the solver is never exposed to a version-less lock
@@ -58,17 +56,16 @@
// to simply dismiss that project. By creating a hard failure case via panic
// instead, we are trying to avoid inflicting the resulting pain on the user by
// instead forcing a decision on the Analyzer implementation.
-func NewLockedProject(n ProjectName, v Version, uri, path string, pkgs []string) LockedProject {
+func NewLockedProject(n ProjectRoot, v Version, url string, pkgs []string) LockedProject {
if v == nil {
panic("must provide a non-nil version to create a LockedProject")
}
lp := LockedProject{
pi: ProjectIdentifier{
- LocalName: n,
- NetworkName: uri,
+ ProjectRoot: n,
+ NetworkName: url,
},
- path: path,
pkgs: pkgs,
}
@@ -110,12 +107,6 @@
return lp.v.Is(lp.r)
}
-// Path returns the path relative to the vendor directory to which the locked
-// project should be checked out.
-func (lp LockedProject) Path() string {
- return lp.path
-}
-
func (lp LockedProject) toAtom() atom {
pa := atom{
id: lp.Ident(),
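NewLockedProject's panic on a nil version is a deliberate correctness measure, as the doc comment above explains. The guard pattern can be sketched in isolation with simplified, hypothetical stand-in types (not gps's actual API):

```go
package main

import "fmt"

// Version stands in for gps's Version interface; a nil value means "no
// version", which the real NewLockedProject treats as a programming error.
type Version interface{ String() string }

type semVersion string

func (v semVersion) String() string { return string(v) }

type lockedProject struct {
	root string
	v    Version
}

// newLockedProject panics on a nil version, mirroring gps's choice to fail
// hard in the constructor rather than let a version-less project reach the
// solver and be silently dismissed later.
func newLockedProject(root string, v Version) lockedProject {
	if v == nil {
		panic("must provide a non-nil version to create a LockedProject")
	}
	return lockedProject{root: root, v: v}
}

func main() {
	lp := newLockedProject("github.com/foo/bar", semVersion("1.0.0"))
	fmt.Println(lp.root, lp.v)

	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	newLockedProject("github.com/foo/baz", nil) // panics by design
}
```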
diff --git a/vendor/github.com/sdboyer/vsolver/manager_test.go b/vendor/github.com/sdboyer/gps/manager_test.go
similarity index 87%
rename from vendor/github.com/sdboyer/vsolver/manager_test.go
rename to vendor/github.com/sdboyer/gps/manager_test.go
index 98e0e38..ebc8091 100644
--- a/vendor/github.com/sdboyer/vsolver/manager_test.go
+++ b/vendor/github.com/sdboyer/gps/manager_test.go
@@ -1,8 +1,7 @@
-package vsolver
+package gps
import (
"fmt"
- "go/build"
"io/ioutil"
"os"
"path"
@@ -15,10 +14,13 @@
var bd string
-type dummyAnalyzer struct{}
+// An analyzer that passes nothing back, but doesn't error. This is the naive
+// case - no constraints, no lock, and no errors. The SourceMgr will interpret
+// this as open/Any constraints on everything in the import graph.
+type naiveAnalyzer struct{}
-func (dummyAnalyzer) GetInfo(ctx build.Context, p ProjectName) (Manifest, Lock, error) {
- return SimpleManifest{N: p}, nil, nil
+func (naiveAnalyzer) GetInfo(string, ProjectRoot) (Manifest, Lock, error) {
+ return nil, nil, nil
}
func sv(s string) *semver.Version {
@@ -40,7 +42,7 @@
if err != nil {
t.Errorf("Failed to create temp dir: %s", err)
}
- _, err = NewSourceManager(dummyAnalyzer{}, cpath, bd, false)
+ _, err = NewSourceManager(naiveAnalyzer{}, cpath, false)
if err != nil {
t.Errorf("Unexpected error on SourceManager creation: %s", err)
@@ -52,12 +54,12 @@
}
}()
- _, err = NewSourceManager(dummyAnalyzer{}, cpath, bd, false)
+ _, err = NewSourceManager(naiveAnalyzer{}, cpath, false)
if err == nil {
t.Errorf("Creating second SourceManager should have failed due to file lock contention")
}
- sm, err := NewSourceManager(dummyAnalyzer{}, cpath, bd, true)
+ sm, err := NewSourceManager(naiveAnalyzer{}, cpath, true)
defer sm.Release()
if err != nil {
t.Errorf("Creating second SourceManager should have succeeded when force flag was passed, but failed with err %s", err)
@@ -78,7 +80,7 @@
if err != nil {
t.Errorf("Failed to create temp dir: %s", err)
}
- sm, err := NewSourceManager(dummyAnalyzer{}, cpath, bd, false)
+ sm, err := NewSourceManager(naiveAnalyzer{}, cpath, false)
if err != nil {
t.Errorf("Unexpected error on SourceManager creation: %s", err)
@@ -92,7 +94,7 @@
}()
defer sm.Release()
- pn := ProjectName("github.com/Masterminds/VCSTestRepo")
+ pn := ProjectRoot("github.com/Masterminds/VCSTestRepo")
v, err := sm.ListVersions(pn)
if err != nil {
t.Errorf("Unexpected error during initial project setup/fetching %s", err)
@@ -124,10 +126,11 @@
// ensure its sorting works, as well.
smc := &bridge{
sm: sm,
- vlists: make(map[ProjectName][]Version),
+ vlists: make(map[ProjectRoot][]Version),
+ s: &solver{},
}
- v, err = smc.listVersions(ProjectIdentifier{LocalName: pn})
+ v, err = smc.listVersions(ProjectIdentifier{ProjectRoot: pn})
if err != nil {
t.Errorf("Unexpected error during initial project setup/fetching %s", err)
}
@@ -157,7 +160,7 @@
_, err = os.Stat(path.Join(cpath, "metadata", "github.com", "Masterminds", "VCSTestRepo", "cache.json"))
if err != nil {
- // TODO temporarily disabled until we turn caching back on
+ // TODO(sdboyer) temporarily disabled until we turn caching back on
//t.Error("Metadata cache json file does not exist in expected location")
}
@@ -171,16 +174,8 @@
t.Error("Repo should exist after non-erroring call to ListVersions")
}
- exists, err = sm.VendorCodeExists(pn)
- if err != nil {
- t.Errorf("Error on checking VendorCodeExists: %s", err)
- }
- if exists {
- t.Error("Shouldn't be any vendor code after just calling ListVersions")
- }
-
// Now reach inside the black box
- pms, err := sm.(*sourceManager).getProjectManager(pn)
+ pms, err := sm.getProjectManager(pn)
if err != nil {
t.Errorf("Error on grabbing project manager obj: %s", err)
}
@@ -202,14 +197,13 @@
t.Errorf("Failed to create temp dir: %s", err)
}
- smi, err := NewSourceManager(dummyAnalyzer{}, cpath, bd, false)
+ sm, err := NewSourceManager(naiveAnalyzer{}, cpath, false)
if err != nil {
t.Errorf("Unexpected error on SourceManager creation: %s", err)
t.FailNow()
}
- sm := smi.(*sourceManager)
- upstreams := []ProjectName{
+ upstreams := []ProjectRoot{
"github.com/Masterminds/VCSTestRepo",
"bitbucket.org/mattfarina/testhgrepo",
"launchpad.net/govcstestbzrrepo",
@@ -314,7 +308,7 @@
if err != nil {
t.Errorf("Failed to create temp dir: %s", err)
}
- sm, err := NewSourceManager(dummyAnalyzer{}, cpath, bd, false)
+ sm, err := NewSourceManager(naiveAnalyzer{}, cpath, false)
if err != nil {
t.Errorf("Unexpected error on SourceManager creation: %s", err)
@@ -330,7 +324,7 @@
// setup done, now do the test
- pn := ProjectName("github.com/Masterminds/VCSTestRepo")
+ pn := ProjectRoot("github.com/Masterminds/VCSTestRepo")
_, _, err = sm.GetProjectInfo(pn, NewVersion("1.0.0"))
if err != nil {
diff --git a/vendor/github.com/sdboyer/gps/manifest.go b/vendor/github.com/sdboyer/gps/manifest.go
new file mode 100644
index 0000000..83fd9d7
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/manifest.go
@@ -0,0 +1,77 @@
+package gps
+
+// Manifest represents manifest-type data for a project at a particular version.
+// That means dependency constraints, both for normal dependencies and for
+// tests. The constraints expressed in a manifest determine the set of versions that
+// are acceptable to try for a given project.
+//
+// Expressing a constraint in a manifest does not guarantee that a particular
+// dependency will be present. It only guarantees that if packages in the
+// project specified by the dependency are discovered through static analysis of
+// the (transitive) import graph, then they will conform to the constraint.
+//
+// This does entail that manifests can express constraints on projects they do
+// not themselves import. This is by design, but its implications are complex.
+// See the gps docs for more information: https://github.com/sdboyer/gps/wiki
+type Manifest interface {
+ // Returns a list of project-level constraints.
+ DependencyConstraints() []ProjectConstraint
+ // Returns a list of constraints applicable to test imports. Note that this
+ // will only be consulted for root manifests.
+ TestDependencyConstraints() []ProjectConstraint
+}
+
+// SimpleManifest is a helper for tools to enumerate manifest data. It's
+// generally intended for ephemeral manifests, such as those Analyzers create on
+// the fly for projects that have no manifest metadata, or whose metadata is
+// expressed through a foreign tool's idioms.
+type SimpleManifest struct {
+ Deps []ProjectConstraint
+ TestDeps []ProjectConstraint
+}
+
+var _ Manifest = SimpleManifest{}
+
+// DependencyConstraints returns the project's dependencies.
+func (m SimpleManifest) DependencyConstraints() []ProjectConstraint {
+ return m.Deps
+}
+
+// TestDependencyConstraints returns the project's test dependencies.
+func (m SimpleManifest) TestDependencyConstraints() []ProjectConstraint {
+ return m.TestDeps
+}
+
+// prepManifest ensures a manifest is prepared and safe for use by the solver.
+// This entails two things:
+//
+// * Ensuring that all ProjectIdentifiers are normalized (otherwise matching
+// can get screwy and the queues go out of alignment)
+// * Defensively ensuring that no outside routine can modify the manifest while
+// the solver is in-flight.
+//
+// This is achieved by copying the manifest's data into a new SimpleManifest.
+func prepManifest(m Manifest) Manifest {
+ if m == nil {
+ return SimpleManifest{}
+ }
+
+ deps := m.DependencyConstraints()
+ ddeps := m.TestDependencyConstraints()
+
+ rm := SimpleManifest{
+ Deps: make([]ProjectConstraint, len(deps)),
+ TestDeps: make([]ProjectConstraint, len(ddeps)),
+ }
+
+ for k, d := range deps {
+ d.Ident = d.Ident.normalize()
+ rm.Deps[k] = d
+ }
+ for k, d := range ddeps {
+ d.Ident = d.Ident.normalize()
+ rm.TestDeps[k] = d
+ }
+
+ return rm
+}
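The defensive-copy half of prepManifest's job — ensure no outside routine can mutate what the solver sees mid-flight — reduces to copying the constraint slices before use. A minimal self-contained sketch, with a simplified stand-in for gps.ProjectConstraint:

```go
package main

import "fmt"

// constraint is a simplified, hypothetical stand-in for gps.ProjectConstraint.
type constraint struct {
	root string
	rule string
}

// prep copies the caller's slice so later mutation by the caller cannot
// affect the solver's view, mirroring what prepManifest does when it copies
// a Manifest's data into a fresh SimpleManifest.
func prep(deps []constraint) []constraint {
	out := make([]constraint, len(deps))
	copy(out, deps)
	return out
}

func main() {
	orig := []constraint{{root: "github.com/foo/bar", rule: "^1.0.0"}}
	prepped := prep(orig)

	// Mutate the original after prepping; the prepared copy is unaffected.
	orig[0].rule = "^2.0.0"
	fmt.Println(prepped[0].rule) // still the rule captured at prep time
}
```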
diff --git a/vendor/github.com/sdboyer/gps/marker-header.png b/vendor/github.com/sdboyer/gps/marker-header.png
new file mode 100644
index 0000000..66965c5
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/marker-header.png
Binary files differ
diff --git a/vendor/github.com/sdboyer/vsolver/project_manager.go b/vendor/github.com/sdboyer/gps/project_manager.go
similarity index 81%
rename from vendor/github.com/sdboyer/vsolver/project_manager.go
rename to vendor/github.com/sdboyer/gps/project_manager.go
index dd10e6a..e174fde 100644
--- a/vendor/github.com/sdboyer/vsolver/project_manager.go
+++ b/vendor/github.com/sdboyer/gps/project_manager.go
@@ -1,4 +1,4 @@
-package vsolver
+package gps
import (
"bytes"
@@ -18,14 +18,11 @@
type projectManager struct {
// The identifier of the project. At this level, corresponds to the
// '$GOPATH/src'-relative path, *and* the network name.
- n ProjectName
+ n ProjectRoot
// build.Context to use in any analysis, and to pass to the analyzer
ctx build.Context
- // Top-level project vendor dir
- vendordir string
-
// Object for the cache repository
crepo *repo
@@ -42,6 +39,7 @@
// The project metadata cache. This is persisted to disk, for reuse across
// solver runs.
+ // TODO(sdboyer) protect with mutex
dc *projectDataCache
}
@@ -53,12 +51,13 @@
f projectExistence
}
-// TODO figure out shape of versions, then implement marshaling/unmarshaling
+// TODO(sdboyer) figure out shape of versions, then implement marshaling/unmarshaling
type projectDataCache struct {
- Version string `json:"version"` // TODO use this
- Infos map[Revision]projectInfo `json:"infos"`
- VMap map[Version]Revision `json:"vmap"`
- RMap map[Revision][]Version `json:"rmap"`
+ Version string `json:"version"` // TODO(sdboyer) use this
+ Infos map[Revision]projectInfo `json:"infos"`
+ Packages map[Revision]PackageTree `json:"packages"`
+ VMap map[Version]Revision `json:"vmap"`
+ RMap map[Revision][]Version `json:"rmap"`
}
// projectInfo holds manifest and lock
@@ -110,13 +109,13 @@
}
pm.crepo.mut.Unlock()
if err != nil {
- // TODO More-er proper-er error
+ // TODO(sdboyer) More-er proper-er error
panic(fmt.Sprintf("canary - why is checkout/whatever failing: %s %s %s", pm.n, v.String(), err))
}
pm.crepo.mut.RLock()
- m, l, err := pm.an.GetInfo(pm.ctx, pm.n)
- // TODO cache results
+ m, l, err := pm.an.GetInfo(filepath.Join(pm.ctx.GOPATH, "src", string(pm.n)), pm.n)
+ // TODO(sdboyer) cache results
pm.crepo.mut.RUnlock()
if err == nil {
@@ -126,10 +125,12 @@
// If m is nil, prepManifest will provide an empty one.
pi := projectInfo{
- Manifest: prepManifest(m, pm.n),
+ Manifest: prepManifest(m),
Lock: l,
}
+ // TODO(sdboyer) this just clobbers all over and ignores the paired/unpaired
+ // distinction; serious fix is needed
if r, exists := pm.dc.VMap[v]; exists {
pm.dc.Infos[r] = pi
}
@@ -140,34 +141,62 @@
return nil, nil, err
}
-func (pm *projectManager) ListPackages(v Version) (PackageTree, error) {
- var err error
+func (pm *projectManager) ListPackages(v Version) (ptree PackageTree, err error) {
if err = pm.ensureCacheExistence(); err != nil {
- return PackageTree{}, err
+ return
}
+ // See if we can find it in the cache
+ var r Revision
+ switch v.(type) {
+ case Revision, PairedVersion:
+ var ok bool
+ if r, ok = v.(Revision); !ok {
+ r = v.(PairedVersion).Underlying()
+ }
+
+ if ptree, cached := pm.dc.Packages[r]; cached {
+ return ptree, nil
+ }
+ default:
+ var has bool
+ if r, has = pm.dc.VMap[v]; has {
+ if ptree, cached := pm.dc.Packages[r]; cached {
+ return ptree, nil
+ }
+ }
+ }
+
+ // TODO(sdboyer) handle the case where we have a version w/out rev, and not in cache
+
+ // Not in the cache; check out the version and do the analysis
pm.crepo.mut.Lock()
// Check out the desired version for analysis
- if pv, ok := v.(PairedVersion); ok {
+ if r != "" {
// Always prefer a rev, if it's available
- err = pm.crepo.r.UpdateVersion(pv.Underlying().String())
+ err = pm.crepo.r.UpdateVersion(string(r))
} else {
// If we don't have a rev, ensure the repo is up to date, otherwise we
// could have a desync issue
if !pm.crepo.synced {
err = pm.crepo.r.Update()
if err != nil {
- return PackageTree{}, fmt.Errorf("Could not fetch latest updates into repository")
+ return PackageTree{}, fmt.Errorf("Could not fetch latest updates into repository: %s", err)
}
pm.crepo.synced = true
}
err = pm.crepo.r.UpdateVersion(v.String())
}
- ex, err := listPackages(filepath.Join(pm.ctx.GOPATH, "src", string(pm.n)), string(pm.n))
+ ptree, err = listPackages(filepath.Join(pm.ctx.GOPATH, "src", string(pm.n)), string(pm.n))
pm.crepo.mut.Unlock()
- return ex, err
+ // TODO(sdboyer) cache errs?
+ if err == nil {
+ pm.dc.Packages[r] = ptree
+ }
+
+ return
}
func (pm *projectManager) ensureCacheExistence() error {
@@ -178,14 +207,17 @@
// don't have to think about it elsewhere
if !pm.CheckExistence(existsInCache) {
if pm.CheckExistence(existsUpstream) {
+ pm.crepo.mut.Lock()
err := pm.crepo.r.Get()
+ pm.crepo.mut.Unlock()
+
if err != nil {
- return fmt.Errorf("Failed to create repository cache for %s", pm.n)
+ return fmt.Errorf("failed to create repository cache for %s", pm.n)
}
pm.ex.s |= existsInCache
pm.ex.f |= existsInCache
} else {
- return fmt.Errorf("Project repository cache for %s does not exist", pm.n)
+ return fmt.Errorf("project %s does not exist upstream", pm.n)
}
}
@@ -202,7 +234,7 @@
pm.ex.f |= exbits
if err != nil {
- // TODO More-er proper-er error
+ // TODO(sdboyer) More-er proper-er error
fmt.Println(err)
return nil, err
}
@@ -214,7 +246,7 @@
}
// Process the version data into the cache
- // TODO detect out-of-sync data as we do this?
+ // TODO(sdboyer) detect out-of-sync data as we do this?
for k, v := range vpairs {
pm.dc.VMap[v] = v.Underlying()
pm.dc.RMap[v.Underlying()] = append(pm.dc.RMap[v.Underlying()], v)
@@ -223,9 +255,9 @@
} else {
vlist = make([]Version, len(pm.dc.VMap))
k := 0
- // TODO key type of VMap should be string; recombine here
+ // TODO(sdboyer) key type of VMap should be string; recombine here
//for v, r := range pm.dc.VMap {
- for v, _ := range pm.dc.VMap {
+ for v := range pm.dc.VMap {
vlist[k] = v
k++
}
@@ -234,6 +266,25 @@
return
}
+func (pm *projectManager) RevisionPresentIn(r Revision) (bool, error) {
+ // First and fastest path is to check the data cache to see if the rev is
+ // present. This could give us false positives, but the cases where that can
+ // occur would require a type of cache staleness that seems *exceedingly*
+ // unlikely to occur.
+ if _, has := pm.dc.Infos[r]; has {
+ return true, nil
+ } else if _, has := pm.dc.RMap[r]; has {
+ return true, nil
+ }
+
+ // For now at least, just run GetInfoAt(); it basically accomplishes the
+ // same thing.
+ if _, _, err := pm.GetInfoAt(r); err != nil {
+ return false, err
+ }
+ return true, nil
+}
+
// CheckExistence provides a direct method for querying existence levels of the
// project. It will only perform actual searching (local fs or over the network)
// if no previous attempt at that search has been made.
@@ -244,12 +295,7 @@
func (pm *projectManager) CheckExistence(ex projectExistence) bool {
if pm.ex.s&ex != ex {
if ex&existsInVendorRoot != 0 && pm.ex.s&existsInVendorRoot == 0 {
- pm.ex.s |= existsInVendorRoot
-
- fi, err := os.Stat(path.Join(pm.vendordir, string(pm.n)))
- if err == nil && fi.IsDir() {
- pm.ex.f |= existsInVendorRoot
- }
+ panic("should now be implemented in bridge")
}
if ex&existsInCache != 0 && pm.ex.s&existsInCache == 0 {
pm.crepo.mut.RLock()
@@ -290,7 +336,7 @@
all := bytes.Split(bytes.TrimSpace(out), []byte("\n"))
if err != nil || len(all) == 0 {
- // TODO remove this path? it really just complicates things, for
+ // TODO(sdboyer) remove this path? it really just complicates things, for
// probably not much benefit
// ls-remote failed, probably due to bad communication or a faulty
@@ -436,8 +482,8 @@
vlist = append(vlist, v)
}
case *vcs.SvnRepo:
- // TODO is it ok to return empty vlist and no error?
- // TODO ...gotta do something for svn, right?
+ // TODO(sdboyer) is it ok to return empty vlist and no error?
+ // TODO(sdboyer) ...gotta do something for svn, right?
default:
panic("unknown repo type")
}
@@ -458,7 +504,7 @@
return err
}
- // TODO could have an err here
+ // TODO(sdboyer) could have an err here
defer os.Rename(bak, idx)
vstr := v.String()
@@ -482,7 +528,7 @@
_, err = r.r.RunFromDir("git", "checkout-index", "-a", "--prefix="+to)
return err
default:
- // TODO This is a dumb, slow approach, but we're punting on making these
+ // TODO(sdboyer) This is a dumb, slow approach, but we're punting on making these
// fast for now because git is the OVERWHELMING case
r.r.UpdateVersion(v.String())
diff --git a/vendor/github.com/sdboyer/vsolver/remote.go b/vendor/github.com/sdboyer/gps/remote.go
similarity index 95%
rename from vendor/github.com/sdboyer/vsolver/remote.go
rename to vendor/github.com/sdboyer/gps/remote.go
index b04b9ce..c808d9a 100644
--- a/vendor/github.com/sdboyer/vsolver/remote.go
+++ b/vendor/github.com/sdboyer/gps/remote.go
@@ -1,4 +1,4 @@
-package vsolver
+package gps
import (
"fmt"
@@ -27,7 +27,7 @@
//err error
//}
-// TODO sync access to this map
+// TODO(sdboyer) sync access to this map
//var remoteCache = make(map[string]remoteResult)
// Regexes for the different known import path flavors
@@ -69,7 +69,7 @@
User: url.User(m[1]),
Host: m[2],
Path: "/" + m[3],
- // TODO This is what stdlib sets; grok why better
+ // TODO(sdboyer) This is what stdlib sets; grok why better
//RawPath: m[3],
}
} else {
@@ -93,7 +93,7 @@
rr.Schemes = []string{rr.CloneURL.Scheme}
}
- // TODO instead of a switch, encode base domain in radix tree and pick
+ // TODO(sdboyer) instead of a switch, encode base domain in radix tree and pick
// detector from there; if failure, then fall back on metadata work
switch {
@@ -156,7 +156,7 @@
//return
case lpRegex.MatchString(path):
- // TODO lp handling is nasty - there's ambiguities which can only really
+ // TODO(sdboyer) lp handling is nasty - there's ambiguities which can only really
// be resolved with a metadata request. See https://github.com/golang/go/issues/11436
v := lpRegex.FindStringSubmatch(path)
@@ -169,7 +169,7 @@
return
case glpRegex.MatchString(path):
- // TODO same ambiguity issues as with normal bzr lp
+ // TODO(sdboyer) same ambiguity issues as with normal bzr lp
v := glpRegex.FindStringSubmatch(path)
rr.CloneURL.Host = "git.launchpad.net"
@@ -208,7 +208,7 @@
switch v[5] {
case "git", "hg", "bzr":
x := strings.SplitN(v[1], "/", 2)
- // TODO is this actually correct for bzr?
+ // TODO(sdboyer) is this actually correct for bzr?
rr.CloneURL.Host = x[0]
rr.CloneURL.Path = x[1]
rr.VCS = []string{v[5]}
diff --git a/vendor/github.com/sdboyer/vsolver/remote_test.go b/vendor/github.com/sdboyer/gps/remote_test.go
similarity index 88%
rename from vendor/github.com/sdboyer/vsolver/remote_test.go
rename to vendor/github.com/sdboyer/gps/remote_test.go
index 3bac9ae..17de00f 100644
--- a/vendor/github.com/sdboyer/vsolver/remote_test.go
+++ b/vendor/github.com/sdboyer/gps/remote_test.go
@@ -1,4 +1,4 @@
-package vsolver
+package gps
import (
"fmt"
@@ -17,69 +17,69 @@
want *remoteRepo
}{
{
- "github.com/sdboyer/vsolver",
+ "github.com/sdboyer/gps",
&remoteRepo{
- Base: "github.com/sdboyer/vsolver",
+ Base: "github.com/sdboyer/gps",
RelPkg: "",
CloneURL: &url.URL{
Host: "github.com",
- Path: "sdboyer/vsolver",
+ Path: "sdboyer/gps",
},
Schemes: nil,
VCS: []string{"git"},
},
},
{
- "github.com/sdboyer/vsolver/foo",
+ "github.com/sdboyer/gps/foo",
&remoteRepo{
- Base: "github.com/sdboyer/vsolver",
+ Base: "github.com/sdboyer/gps",
RelPkg: "foo",
CloneURL: &url.URL{
Host: "github.com",
- Path: "sdboyer/vsolver",
+ Path: "sdboyer/gps",
},
Schemes: nil,
VCS: []string{"git"},
},
},
{
- "git@github.com:sdboyer/vsolver",
+ "git@github.com:sdboyer/gps",
&remoteRepo{
- Base: "github.com/sdboyer/vsolver",
+ Base: "github.com/sdboyer/gps",
RelPkg: "",
CloneURL: &url.URL{
Scheme: "ssh",
User: url.User("git"),
Host: "github.com",
- Path: "sdboyer/vsolver",
+ Path: "sdboyer/gps",
},
Schemes: []string{"ssh"},
VCS: []string{"git"},
},
},
{
- "https://github.com/sdboyer/vsolver/foo",
+ "https://github.com/sdboyer/gps/foo",
&remoteRepo{
- Base: "github.com/sdboyer/vsolver",
+ Base: "github.com/sdboyer/gps",
RelPkg: "foo",
CloneURL: &url.URL{
Scheme: "https",
Host: "github.com",
- Path: "sdboyer/vsolver",
+ Path: "sdboyer/gps",
},
Schemes: []string{"https"},
VCS: []string{"git"},
},
},
{
- "https://github.com/sdboyer/vsolver/foo/bar",
+ "https://github.com/sdboyer/gps/foo/bar",
&remoteRepo{
- Base: "github.com/sdboyer/vsolver",
+ Base: "github.com/sdboyer/gps",
RelPkg: "foo/bar",
CloneURL: &url.URL{
Scheme: "https",
Host: "github.com",
- Path: "sdboyer/vsolver",
+ Path: "sdboyer/gps",
},
Schemes: []string{"https"},
VCS: []string{"git"},
@@ -87,53 +87,53 @@
},
// some invalid github username patterns
{
- "github.com/-sdboyer/vsolver/foo",
+ "github.com/-sdboyer/gps/foo",
nil,
},
{
- "github.com/sdboyer-/vsolver/foo",
+ "github.com/sdboyer-/gps/foo",
nil,
},
{
- "github.com/sdbo.yer/vsolver/foo",
+ "github.com/sdbo.yer/gps/foo",
nil,
},
{
- "github.com/sdbo_yer/vsolver/foo",
+ "github.com/sdbo_yer/gps/foo",
nil,
},
{
- "gopkg.in/sdboyer/vsolver.v0",
+ "gopkg.in/sdboyer/gps.v0",
&remoteRepo{
- Base: "gopkg.in/sdboyer/vsolver.v0",
+ Base: "gopkg.in/sdboyer/gps.v0",
RelPkg: "",
CloneURL: &url.URL{
Host: "github.com",
- Path: "sdboyer/vsolver",
+ Path: "sdboyer/gps",
},
VCS: []string{"git"},
},
},
{
- "gopkg.in/sdboyer/vsolver.v0/foo",
+ "gopkg.in/sdboyer/gps.v0/foo",
&remoteRepo{
- Base: "gopkg.in/sdboyer/vsolver.v0",
+ Base: "gopkg.in/sdboyer/gps.v0",
RelPkg: "foo",
CloneURL: &url.URL{
Host: "github.com",
- Path: "sdboyer/vsolver",
+ Path: "sdboyer/gps",
},
VCS: []string{"git"},
},
},
{
- "gopkg.in/sdboyer/vsolver.v0/foo/bar",
+ "gopkg.in/sdboyer/gps.v0/foo/bar",
&remoteRepo{
- Base: "gopkg.in/sdboyer/vsolver.v0",
+ Base: "gopkg.in/sdboyer/gps.v0",
RelPkg: "foo/bar",
CloneURL: &url.URL{
Host: "github.com",
- Path: "sdboyer/vsolver",
+ Path: "sdboyer/gps",
},
VCS: []string{"git"},
},
@@ -451,7 +451,7 @@
t.Errorf("deduceRemoteRepo(%q): RelPkg was %s, wanted %s", fix.path, got.RelPkg, want.RelPkg)
}
if !reflect.DeepEqual(got.CloneURL, want.CloneURL) {
- // mispelling things is cool when it makes columns line up
+ // misspelling things is cool when it makes columns line up
t.Errorf("deduceRemoteRepo(%q): CloneURL disagreement:\n(GOT) %s\n(WNT) %s", fix.path, ufmt(got.CloneURL), ufmt(want.CloneURL))
}
if !reflect.DeepEqual(got.VCS, want.VCS) {
diff --git a/vendor/github.com/sdboyer/vsolver/remove_go16.go b/vendor/github.com/sdboyer/gps/remove_go16.go
similarity index 97%
rename from vendor/github.com/sdboyer/vsolver/remove_go16.go
rename to vendor/github.com/sdboyer/gps/remove_go16.go
index 21a3530..8c7844d 100644
--- a/vendor/github.com/sdboyer/vsolver/remove_go16.go
+++ b/vendor/github.com/sdboyer/gps/remove_go16.go
@@ -1,6 +1,6 @@
// +build !go1.7
-package vsolver
+package gps
import (
"os"
diff --git a/vendor/github.com/sdboyer/vsolver/remove_go17.go b/vendor/github.com/sdboyer/gps/remove_go17.go
similarity index 92%
rename from vendor/github.com/sdboyer/vsolver/remove_go17.go
rename to vendor/github.com/sdboyer/gps/remove_go17.go
index cb18bae..59c19a6 100644
--- a/vendor/github.com/sdboyer/vsolver/remove_go17.go
+++ b/vendor/github.com/sdboyer/gps/remove_go17.go
@@ -1,6 +1,6 @@
// +build go1.7
-package vsolver
+package gps
import "os"
diff --git a/vendor/github.com/sdboyer/vsolver/result.go b/vendor/github.com/sdboyer/gps/result.go
similarity index 66%
rename from vendor/github.com/sdboyer/vsolver/result.go
rename to vendor/github.com/sdboyer/gps/result.go
index e6e929e..e601de9 100644
--- a/vendor/github.com/sdboyer/vsolver/result.go
+++ b/vendor/github.com/sdboyer/gps/result.go
@@ -1,4 +1,4 @@
-package vsolver
+package gps
import (
"fmt"
@@ -7,14 +7,14 @@
"path/filepath"
)
-// A Result is returned by a solver run. It is mostly just a Lock, with some
+// A Solution is returned by a solver run. It is mostly just a Lock, with some
// additional methods that report information about the solve run.
-type Result interface {
+type Solution interface {
Lock
Attempts() int
}
-type result struct {
+type solution struct {
// A list of the projects selected by the solver.
p []LockedProject
@@ -37,37 +37,37 @@
return err
}
- // TODO parallelize
+ // TODO(sdboyer) parallelize
for _, p := range l.Projects() {
- to := path.Join(basedir, string(p.Ident().LocalName))
+ to := path.Join(basedir, string(p.Ident().ProjectRoot))
err := os.MkdirAll(to, 0777)
if err != nil {
return err
}
- err = sm.ExportProject(p.Ident().LocalName, p.Version(), to)
+ err = sm.ExportProject(p.Ident().ProjectRoot, p.Version(), to)
if err != nil {
removeAll(basedir)
- return fmt.Errorf("Error while exporting %s: %s", p.Ident().LocalName, err)
+ return fmt.Errorf("Error while exporting %s: %s", p.Ident().ProjectRoot, err)
}
if sv {
filepath.Walk(to, stripVendor)
}
- // TODO dump version metadata file
+ // TODO(sdboyer) dump version metadata file
}
return nil
}
-func (r result) Projects() []LockedProject {
+func (r solution) Projects() []LockedProject {
return r.p
}
-func (r result) Attempts() int {
+func (r solution) Attempts() int {
return r.att
}
-func (r result) InputHash() []byte {
+func (r solution) InputHash() []byte {
return r.hd
}
diff --git a/vendor/github.com/sdboyer/vsolver/result_test.go b/vendor/github.com/sdboyer/gps/result_test.go
similarity index 73%
rename from vendor/github.com/sdboyer/vsolver/result_test.go
rename to vendor/github.com/sdboyer/gps/result_test.go
index 5419d32..1aed83b 100644
--- a/vendor/github.com/sdboyer/vsolver/result_test.go
+++ b/vendor/github.com/sdboyer/gps/result_test.go
@@ -1,32 +1,22 @@
-package vsolver
+package gps
import (
- "go/build"
"os"
"path"
"testing"
)
-var basicResult result
+var basicResult solution
var kub atom
-// An analyzer that passes nothing back, but doesn't error. This expressly
-// creates a situation that shouldn't be able to happen from a general solver
-// perspective, so it's only useful for particular situations in tests
-type passthruAnalyzer struct{}
-
-func (passthruAnalyzer) GetInfo(ctx build.Context, p ProjectName) (Manifest, Lock, error) {
- return nil, nil, nil
-}
-
func pi(n string) ProjectIdentifier {
return ProjectIdentifier{
- LocalName: ProjectName(n),
+ ProjectRoot: ProjectRoot(n),
}
}
func init() {
- basicResult = result{
+ basicResult = solution{
att: 1,
p: []LockedProject{
pa2lp(atom{
@@ -58,7 +48,7 @@
tmp := path.Join(os.TempDir(), "vsolvtest")
os.RemoveAll(tmp)
- sm, err := NewSourceManager(passthruAnalyzer{}, path.Join(tmp, "cache"), path.Join(tmp, "base"), false)
+ sm, err := NewSourceManager(naiveAnalyzer{}, path.Join(tmp, "cache"), false)
if err != nil {
t.Errorf("NewSourceManager errored unexpectedly: %q", err)
}
@@ -68,7 +58,7 @@
t.Errorf("Unexpected error while creating vendor tree: %s", err)
}
- // TODO add more checks
+ // TODO(sdboyer) add more checks
}
func BenchmarkCreateVendorTree(b *testing.B) {
@@ -79,7 +69,7 @@
tmp := path.Join(os.TempDir(), "vsolvtest")
clean := true
- sm, err := NewSourceManager(passthruAnalyzer{}, path.Join(tmp, "cache"), path.Join(tmp, "base"), true)
+ sm, err := NewSourceManager(naiveAnalyzer{}, path.Join(tmp, "cache"), true)
if err != nil {
b.Errorf("NewSourceManager errored unexpectedly: %q", err)
clean = false
@@ -87,7 +77,7 @@
// Prefetch the projects before timer starts
for _, lp := range r.p {
- _, _, err := sm.GetProjectInfo(lp.Ident().LocalName, lp.Version())
+ _, _, err := sm.GetProjectInfo(lp.Ident().ProjectRoot, lp.Version())
if err != nil {
b.Errorf("failed getting project info during prefetch: %s", err)
clean = false
diff --git a/vendor/github.com/sdboyer/vsolver/satisfy.go b/vendor/github.com/sdboyer/gps/satisfy.go
similarity index 76%
rename from vendor/github.com/sdboyer/vsolver/satisfy.go
rename to vendor/github.com/sdboyer/gps/satisfy.go
index c431cdc..8c99f47 100644
--- a/vendor/github.com/sdboyer/vsolver/satisfy.go
+++ b/vendor/github.com/sdboyer/gps/satisfy.go
@@ -1,4 +1,4 @@
-package vsolver
+package gps
// checkProject performs all constraint checks on a new project (with packages)
// that we want to select. It determines if selecting the atom would result in
@@ -12,41 +12,54 @@
}
if err := s.checkAtomAllowable(pa); err != nil {
+ s.logSolve(err)
return err
}
if err := s.checkRequiredPackagesExist(a); err != nil {
+ s.logSolve(err)
return err
}
deps, err := s.getImportsAndConstraintsOf(a)
if err != nil {
// An err here would be from the package fetcher; pass it straight back
+ // TODO(sdboyer) can we logSolve this?
return err
}
for _, dep := range deps {
if err := s.checkIdentMatches(a, dep); err != nil {
+ s.logSolve(err)
return err
}
if err := s.checkDepsConstraintsAllowable(a, dep); err != nil {
+ s.logSolve(err)
return err
}
if err := s.checkDepsDisallowsSelected(a, dep); err != nil {
+ s.logSolve(err)
return err
}
+ // TODO(sdboyer) decide how to refactor in order to re-enable this. Checking for
+ // revision existence is important...but kinda obnoxious.
+ //if err := s.checkRevisionExists(a, dep); err != nil {
+ //s.logSolve(err)
+ //return err
+ //}
if err := s.checkPackageImportsFromDepExist(a, dep); err != nil {
+ s.logSolve(err)
return err
}
- // TODO add check that fails if adding this atom would create a loop
+ // TODO(sdboyer) add check that fails if adding this atom would create a loop
}
return nil
}
-// checkPackages performs all constraint checks new packages being added to an
-// already-selected project. It determines if selecting the packages would
+// checkPackages performs all constraint checks for new packages being added to
+// an already-selected project. It determines if selecting the packages would
// result in a state where all solver requirements are still satisfied.
func (s *solver) checkPackage(a atomWithPackages) error {
if nilpa == a.a {
@@ -60,20 +73,31 @@
deps, err := s.getImportsAndConstraintsOf(a)
if err != nil {
// An err here would be from the package fetcher; pass it straight back
+ // TODO(sdboyer) can we logSolve this?
return err
}
for _, dep := range deps {
if err := s.checkIdentMatches(a, dep); err != nil {
+ s.logSolve(err)
return err
}
if err := s.checkDepsConstraintsAllowable(a, dep); err != nil {
+ s.logSolve(err)
return err
}
if err := s.checkDepsDisallowsSelected(a, dep); err != nil {
+ s.logSolve(err)
return err
}
+ // TODO(sdboyer) decide how to refactor in order to re-enable this. Checking for
+ // revision existence is important...but kinda obnoxious.
+ //if err := s.checkRevisionExists(a, dep); err != nil {
+ //s.logSolve(err)
+ //return err
+ //}
if err := s.checkPackageImportsFromDepExist(a, dep); err != nil {
+ s.logSolve(err)
return err
}
}
@@ -88,7 +112,7 @@
if s.b.matches(pa.id, constraint, pa.v) {
return nil
}
- // TODO collect constraint failure reason (wait...aren't we, below?)
+ // TODO(sdboyer) collect constraint failure reason (wait...aren't we, below?)
deps := s.sel.getDependenciesOn(pa.id)
var failparent []dependency
@@ -105,7 +129,6 @@
c: constraint,
}
- s.logSolve(err)
return err
}
@@ -114,7 +137,7 @@
func (s *solver) checkRequiredPackagesExist(a atomWithPackages) error {
ptree, err := s.b.listPackages(a.a.id, a.a.v)
if err != nil {
- // TODO handle this more gracefully
+ // TODO(sdboyer) handle this more gracefully
return err
}
@@ -122,7 +145,7 @@
fp := make(map[string]errDeppers)
// We inspect these in a bit of a roundabout way, in order to incrementally
// build up the failure we'd return if there is, indeed, a missing package.
- // TODO rechecking all of these every time is wasteful. Is there a shortcut?
+ // TODO(sdboyer) rechecking all of these every time is wasteful. Is there a shortcut?
for _, dep := range deps {
for _, pkg := range dep.dep.pl {
if errdep, seen := fp[pkg]; seen {
@@ -141,12 +164,10 @@
}
if len(fp) > 0 {
- e := &checkeeHasProblemPackagesFailure{
+ return &checkeeHasProblemPackagesFailure{
goal: a.a,
failpkg: fp,
}
- s.logSolve(e)
- return e
}
return nil
}
@@ -154,7 +175,7 @@
// checkDepsConstraintsAllowable checks that the constraints of an atom on a
// given dep are valid with respect to existing constraints.
func (s *solver) checkDepsConstraintsAllowable(a atomWithPackages, cdep completeDep) error {
- dep := cdep.ProjectDep
+ dep := cdep.ProjectConstraint
constraint := s.sel.getConstraint(dep.Ident)
// Ensure the constraint expressed by the dep has at least some possible
// intersection with the intersection of existing constraints.
@@ -175,31 +196,27 @@
}
}
- err := &disjointConstraintFailure{
+ return &disjointConstraintFailure{
goal: dependency{depender: a.a, dep: cdep},
failsib: failsib,
nofailsib: nofailsib,
c: constraint,
}
- s.logSolve(err)
- return err
}
// checkDepsDisallowsSelected ensures that an atom's constraints on a particular
// dep are not incompatible with the version of that dep that's already been
// selected.
func (s *solver) checkDepsDisallowsSelected(a atomWithPackages, cdep completeDep) error {
- dep := cdep.ProjectDep
+ dep := cdep.ProjectConstraint
selected, exists := s.sel.selected(dep.Ident)
if exists && !s.b.matches(dep.Ident, dep.Constraint, selected.a.v) {
s.fail(dep.Ident)
- err := &constraintNotAllowedFailure{
+ return &constraintNotAllowedFailure{
goal: dependency{depender: a.a, dep: cdep},
v: selected.a.v,
}
- s.logSolve(err)
- return err
}
return nil
}
@@ -212,8 +229,8 @@
// identifiers with the same local name, but that disagree about where their
// network source is.
func (s *solver) checkIdentMatches(a atomWithPackages, cdep completeDep) error {
- dep := cdep.ProjectDep
- if cur, exists := s.names[dep.Ident.LocalName]; exists {
+ dep := cdep.ProjectConstraint
+ if cur, exists := s.names[dep.Ident.ProjectRoot]; exists {
if cur != dep.Ident.netName() {
deps := s.sel.getDependenciesOn(a.a.id)
// Fail all the other deps, as there's no way atom can ever be
@@ -222,15 +239,13 @@
s.fail(d.depender.id)
}
- err := &sourceMismatchFailure{
- shared: dep.Ident.LocalName,
+ return &sourceMismatchFailure{
+ shared: dep.Ident.ProjectRoot,
sel: deps,
current: cur,
mismatch: dep.Ident.netName(),
prob: a.a,
}
- s.logSolve(err)
- return err
}
}
@@ -240,7 +255,7 @@
// checkPackageImportsFromDepExist ensures that, if the dep is already selected,
// the newly-required set of packages being placed on it exist and are valid.
func (s *solver) checkPackageImportsFromDepExist(a atomWithPackages, cdep completeDep) error {
- sel, is := s.sel.selected(cdep.ProjectDep.Ident)
+ sel, is := s.sel.selected(cdep.ProjectConstraint.Ident)
if !is {
// dep is not already selected; nothing to do
return nil
@@ -248,7 +263,7 @@
ptree, err := s.b.listPackages(sel.a.id, sel.a.v)
if err != nil {
- // TODO handle this more gracefully
+ // TODO(sdboyer) handle this more gracefully
return err
}
@@ -272,8 +287,30 @@
}
if len(e.pl) > 0 {
- s.logSolve(e)
return e
}
return nil
}
+
+// checkRevisionExists ensures that, if a dependency is constrained by a
+// revision, that revision actually exists.
+func (s *solver) checkRevisionExists(a atomWithPackages, cdep completeDep) error {
+ r, isrev := cdep.Constraint.(Revision)
+ if !isrev {
+ // Constraint is not a revision; nothing to do
+ return nil
+ }
+
+ present, _ := s.b.revisionPresentIn(cdep.Ident, r)
+ if present {
+ return nil
+ }
+
+ return &nonexistentRevisionFailure{
+ goal: dependency{
+ depender: a.a,
+ dep: cdep,
+ },
+ r: r,
+ }
+}
diff --git a/vendor/github.com/sdboyer/vsolver/selection.go b/vendor/github.com/sdboyer/gps/selection.go
similarity index 92%
rename from vendor/github.com/sdboyer/vsolver/selection.go
rename to vendor/github.com/sdboyer/gps/selection.go
index 9aaac4d..6d84643 100644
--- a/vendor/github.com/sdboyer/vsolver/selection.go
+++ b/vendor/github.com/sdboyer/gps/selection.go
@@ -1,4 +1,4 @@
-package vsolver
+package gps
type selection struct {
projects []selected
@@ -60,7 +60,7 @@
// Compute a list of the unique packages within the given ProjectIdentifier that
// have dependers, and the number of dependers they have.
func (s *selection) getRequiredPackagesIn(id ProjectIdentifier) map[string]int {
- // TODO this is horribly inefficient to do on the fly; we need a method to
+ // TODO(sdboyer) this is horribly inefficient to do on the fly; we need a method to
// precompute it on pushing a new dep, and preferably with an immut
// structure so that we can pop with zero cost.
uniq := make(map[string]int)
@@ -82,7 +82,7 @@
// are currently selected, and the number of times each package has been
// independently selected.
func (s *selection) getSelectedPackagesIn(id ProjectIdentifier) map[string]int {
- // TODO this is horribly inefficient to do on the fly; we need a method to
+ // TODO(sdboyer) this is horribly inefficient to do on the fly; we need a method to
// precompute it on pushing a new dep, and preferably with an immut
// structure so that we can pop with zero cost.
uniq := make(map[string]int)
@@ -108,7 +108,7 @@
return any
}
- // TODO recomputing this sucks and is quite wasteful. Precompute/cache it
+ // TODO(sdboyer) recomputing this sucks and is quite wasteful. Precompute/cache it
// on changes to the constraint set, instead.
// The solver itself is expected to maintain the invariant that all the
@@ -141,8 +141,6 @@
return atomWithPackages{a: nilpa}, false
}
-// TODO take a ProjectName, but optionally also a preferred version. This will
-// enable the lock files of dependencies to remain slightly more stable.
type unselected struct {
sl []bimodalIdentifier
cmp func(i, j int) bool
@@ -179,7 +177,6 @@
// The worst case for both of these is O(n), but in practice the first case will
// be O(1), as we iterate the queue from front to back.
func (u *unselected) remove(bmi bimodalIdentifier) {
- // TODO is it worth implementing a binary search here?
for k, pi := range u.sl {
if pi.id.eq(bmi.id) {
// Simple slice comparison - assume they're both sorted the same
diff --git a/vendor/github.com/sdboyer/gps/solve_basic_test.go b/vendor/github.com/sdboyer/gps/solve_basic_test.go
new file mode 100644
index 0000000..055ecc8
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/solve_basic_test.go
@@ -0,0 +1,1362 @@
+package gps
+
+import (
+ "fmt"
+ "regexp"
+ "strings"
+
+ "github.com/Masterminds/semver"
+)
+
+var regfrom = regexp.MustCompile(`^(\w*) from (\w*) ([0-9\.]*)`)
+
+// nvSplit splits an "info" string on " " into the pair of name and
+// version/constraint, and returns each individually.
+//
+// This is for narrow use - panics if there are fewer than two resulting items in
+// the slice.
+func nvSplit(info string) (id ProjectIdentifier, version string) {
+ if strings.Contains(info, " from ") {
+ parts := regfrom.FindStringSubmatch(info)
+ info = parts[1] + " " + parts[3]
+ id.NetworkName = parts[2]
+ }
+
+ s := strings.SplitN(info, " ", 2)
+ if len(s) < 2 {
+ panic(fmt.Sprintf("Malformed name/version info string '%s'", info))
+ }
+
+ id.ProjectRoot, version = ProjectRoot(s[0]), s[1]
+ if id.NetworkName == "" {
+ id.NetworkName = string(id.ProjectRoot)
+ }
+ return
+}
+
+// nvrSplit splits an "info" string on " " into the triplet of name,
+// version/constraint, and revision, and returns each individually.
+//
+// It will work fine if only name and version/constraint are provided.
+//
+// This is for narrow use - panics if there are fewer than two resulting items in
+// the slice.
+func nvrSplit(info string) (id ProjectIdentifier, version string, revision Revision) {
+ if strings.Contains(info, " from ") {
+ parts := regfrom.FindStringSubmatch(info)
+ info = parts[1] + " " + parts[3]
+ id.NetworkName = parts[2]
+ }
+
+ s := strings.SplitN(info, " ", 3)
+ if len(s) < 2 {
+ panic(fmt.Sprintf("Malformed name/version info string '%s'", info))
+ }
+
+ id.ProjectRoot, version = ProjectRoot(s[0]), s[1]
+ if id.NetworkName == "" {
+ id.NetworkName = string(id.ProjectRoot)
+ }
+
+ if len(s) == 3 {
+ revision = Revision(s[2])
+ }
+ return
+}
+
+// mkAtom splits the input string on a space, and uses the first two elements as
+// the project identifier and version, respectively.
+//
+// The version segment may have a leading character indicating the type of
+// version to create:
+//
+// p: create a "plain" (non-semver) version.
+// b: create a branch version.
+// r: create a revision.
+//
+// No prefix is assumed to indicate a semver version.
+//
+// If a third space-delimited element is provided, it will be interpreted as a
+// revision, and used as the underlying version in a PairedVersion. No prefix
+// should be provided in this case. It is an error (and will panic) to try to
+// pass a revision with an underlying revision.
+func mkAtom(info string) atom {
+ id, ver, rev := nvrSplit(info)
+
+ var v Version
+ switch ver[0] {
+ case 'r':
+ if rev != "" {
+ panic("Cannot pair a revision with a revision")
+ }
+ v = Revision(ver[1:])
+ case 'p':
+ v = NewVersion(ver[1:])
+ case 'b':
+ v = NewBranch(ver[1:])
+ default:
+ _, err := semver.NewVersion(ver)
+ if err != nil {
+ // don't want to allow bad test data at this level, so just panic
+ panic(fmt.Sprintf("Error when converting '%s' into semver: %s", ver, err))
+ }
+ v = NewVersion(ver)
+ }
+
+ if rev != "" {
+ v = v.(UnpairedVersion).Is(rev)
+ }
+
+ return atom{
+ id: id,
+ v: v,
+ }
+}
+
+// mkPDep splits the input string on a space, and uses the first two elements
+// as the project identifier and constraint body, respectively.
+//
+// The constraint body may have a leading character indicating the type of
+// version to create:
+//
+// p: create a "plain" (non-semver) version.
+// b: create a branch version.
+// r: create a revision.
+//
+// If no leading character is used, a semver constraint is assumed.
+func mkPDep(info string) ProjectConstraint {
+ id, ver, rev := nvrSplit(info)
+
+ var c Constraint
+ switch ver[0] {
+ case 'r':
+ c = Revision(ver[1:])
+ case 'p':
+ c = NewVersion(ver[1:])
+ case 'b':
+ c = NewBranch(ver[1:])
+ default:
+ // Without one of those leading characters, we know it's a proper semver
+ // expression, so use the other parser that doesn't look for a rev
+ rev = ""
+ id, ver = nvSplit(info)
+ var err error
+ c, err = NewSemverConstraint(ver)
+ if err != nil {
+ // don't want bad test data at this level, so just panic
+ panic(fmt.Sprintf("Error when converting '%s' into semver constraint: %s (full info: %s)", ver, err, info))
+ }
+ }
+
+ // There's no practical reason that a real tool would need to produce a
+ // constraint that's a PairedVersion, but it is a possibility admitted by the
+ // system, so we at least allow for it in our testing harness.
+ if rev != "" {
+ // Of course, this *will* panic if the predicate is a revision or a
+ // semver constraint, neither of which implement UnpairedVersion. This
+ // is as intended, to prevent bad data from entering the system.
+ c = c.(UnpairedVersion).Is(rev)
+ }
+
+ return ProjectConstraint{
+ Ident: id,
+ Constraint: c,
+ }
+}
+
+// A depspec is a fixture representing all the information a SourceManager would
+// ordinarily glean directly from interrogating a repository.
+type depspec struct {
+ n ProjectRoot
+ v Version
+ deps []ProjectConstraint
+ devdeps []ProjectConstraint
+ pkgs []tpkg
+}
+
+// mkDepspec creates a depspec by processing a series of strings, each of which
+// contains an identifier and version information.
+//
+// The first string is broken out into the name and version of the package being
+// described - see the docs on mkAtom for details. Subsequent strings are
+// interpreted as dep constraints of that dep at that version. See the docs on
+// mkPDep for details.
+//
+// If a string other than the first includes a "(dev) " prefix, it will be
+// treated as a test-only dependency.
+func mkDepspec(pi string, deps ...string) depspec {
+ pa := mkAtom(pi)
+ if string(pa.id.ProjectRoot) != pa.id.NetworkName {
+ panic("alternate source on self makes no sense")
+ }
+
+ ds := depspec{
+ n: pa.id.ProjectRoot,
+ v: pa.v,
+ }
+
+ for _, dep := range deps {
+ var sl *[]ProjectConstraint
+ if strings.HasPrefix(dep, "(dev) ") {
+ dep = strings.TrimPrefix(dep, "(dev) ")
+ sl = &ds.devdeps
+ } else {
+ sl = &ds.deps
+ }
+
+ *sl = append(*sl, mkPDep(dep))
+ }
+
+ return ds
+}
+
+// mklock makes a fixLock, suitable to act as a lock file
+func mklock(pairs ...string) fixLock {
+ l := make(fixLock, 0)
+ for _, s := range pairs {
+ pa := mkAtom(s)
+ l = append(l, NewLockedProject(pa.id.ProjectRoot, pa.v, pa.id.netName(), nil))
+ }
+
+ return l
+}
+
+// mkrevlock makes a fixLock, suitable to act as a lock file, with only a name
+// and a rev
+func mkrevlock(pairs ...string) fixLock {
+ l := make(fixLock, 0)
+ for _, s := range pairs {
+ pa := mkAtom(s)
+ l = append(l, NewLockedProject(pa.id.ProjectRoot, pa.v.(PairedVersion).Underlying(), pa.id.netName(), nil))
+ }
+
+ return l
+}
+
+// mksolution makes a result set
+func mksolution(pairs ...string) map[string]Version {
+ m := make(map[string]Version)
+ for _, pair := range pairs {
+ a := mkAtom(pair)
+ // TODO(sdboyer) identifierify
+ m[string(a.id.ProjectRoot)] = a.v
+ }
+
+ return m
+}
+
+// computeBasicReachMap takes a depspec and computes a reach map which is
+// identical to the explicit depgraph.
+//
+// Using a reachMap here is overkill for what the basic fixtures actually need,
+// but we use it anyway for congruence with the more general cases.
+func computeBasicReachMap(ds []depspec) reachMap {
+ rm := make(reachMap)
+
+ for k, d := range ds {
+ n := string(d.n)
+ lm := map[string][]string{
+ n: nil,
+ }
+ v := d.v
+ if k == 0 {
+ // Put the root in with a nil rev, to accommodate the solver
+ v = nil
+ }
+ rm[pident{n: d.n, v: v}] = lm
+
+ for _, dep := range d.deps {
+ lm[n] = append(lm[n], string(dep.Ident.ProjectRoot))
+ }
+
+ // first is root
+ if k == 0 {
+ for _, dep := range d.devdeps {
+ lm[n] = append(lm[n], string(dep.Ident.ProjectRoot))
+ }
+ }
+ }
+
+ return rm
+}
+
+type pident struct {
+ n ProjectRoot
+ v Version
+}
+
+type specfix interface {
+ name() string
+ specs() []depspec
+ maxTries() int
+ expectErrs() []string
+ solution() map[string]Version
+}
+
+// A basicFixture is a declarative test fixture that can cover a wide variety of
+// solver cases. All cases, however, maintain one invariant: package == project.
+// There are no subpackages, and so it is impossible for them to trigger or
+// require bimodal solving.
+//
+// This type is separate from bimodalFixture in part for legacy reasons - many
+// of these were adapted from similar tests in dart's pub lib, where there is no
+// such thing as "bimodal solving".
+//
+// But it's also useful to keep them separate because bimodal solving involves
+// considerably more complexity than simple solving, both in terms of fixture
+// declaration and actual solving mechanics. Thus, we gain a lot of value for
+// contributors and maintainers by keeping comprehension costs relatively low
+// while still covering important cases.
+type basicFixture struct {
+ // name of this fixture datum
+ n string
+ // depspecs. always treat first as root
+ ds []depspec
+ // results; map of name/version pairs
+ r map[string]Version
+ // max attempts the solver should need to find solution. 0 means no limit
+ maxAttempts int
+ // Use downgrade instead of default upgrade sorter
+ downgrade bool
+ // lock file simulator, if one's to be used at all
+ l fixLock
+ // projects expected to have errors, if any
+ errp []string
+ // request up/downgrade to all projects
+ changeall bool
+}
+
+func (f basicFixture) name() string {
+ return f.n
+}
+
+func (f basicFixture) specs() []depspec {
+ return f.ds
+}
+
+func (f basicFixture) maxTries() int {
+ return f.maxAttempts
+}
+
+func (f basicFixture) expectErrs() []string {
+ return f.errp
+}
+
+func (f basicFixture) solution() map[string]Version {
+ return f.r
+}
+
+// A table of basicFixtures, used in the basic solving test set.
+var basicFixtures = map[string]basicFixture{
+ // basic fixtures
+ "no dependencies": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0"),
+ },
+ r: mksolution(),
+ },
+ "simple dependency tree": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "a 1.0.0", "b 1.0.0"),
+ mkDepspec("a 1.0.0", "aa 1.0.0", "ab 1.0.0"),
+ mkDepspec("aa 1.0.0"),
+ mkDepspec("ab 1.0.0"),
+ mkDepspec("b 1.0.0", "ba 1.0.0", "bb 1.0.0"),
+ mkDepspec("ba 1.0.0"),
+ mkDepspec("bb 1.0.0"),
+ },
+ r: mksolution(
+ "a 1.0.0",
+ "aa 1.0.0",
+ "ab 1.0.0",
+ "b 1.0.0",
+ "ba 1.0.0",
+ "bb 1.0.0",
+ ),
+ },
+ "shared dependency with overlapping constraints": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "a 1.0.0", "b 1.0.0"),
+ mkDepspec("a 1.0.0", "shared >=2.0.0, <4.0.0"),
+ mkDepspec("b 1.0.0", "shared >=3.0.0, <5.0.0"),
+ mkDepspec("shared 2.0.0"),
+ mkDepspec("shared 3.0.0"),
+ mkDepspec("shared 3.6.9"),
+ mkDepspec("shared 4.0.0"),
+ mkDepspec("shared 5.0.0"),
+ },
+ r: mksolution(
+ "a 1.0.0",
+ "b 1.0.0",
+ "shared 3.6.9",
+ ),
+ },
+ "downgrade on overlapping constraints": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "a 1.0.0", "b 1.0.0"),
+ mkDepspec("a 1.0.0", "shared >=2.0.0, <=4.0.0"),
+ mkDepspec("b 1.0.0", "shared >=3.0.0, <5.0.0"),
+ mkDepspec("shared 2.0.0"),
+ mkDepspec("shared 3.0.0"),
+ mkDepspec("shared 3.6.9"),
+ mkDepspec("shared 4.0.0"),
+ mkDepspec("shared 5.0.0"),
+ },
+ r: mksolution(
+ "a 1.0.0",
+ "b 1.0.0",
+ "shared 3.0.0",
+ ),
+ downgrade: true,
+ },
+ "shared dependency where dependent version in turn affects other dependencies": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "foo <=1.0.2", "bar 1.0.0"),
+ mkDepspec("foo 1.0.0"),
+ mkDepspec("foo 1.0.1", "bang 1.0.0"),
+ mkDepspec("foo 1.0.2", "whoop 1.0.0"),
+ mkDepspec("foo 1.0.3", "zoop 1.0.0"),
+ mkDepspec("bar 1.0.0", "foo <=1.0.1"),
+ mkDepspec("bang 1.0.0"),
+ mkDepspec("whoop 1.0.0"),
+ mkDepspec("zoop 1.0.0"),
+ },
+ r: mksolution(
+ "foo 1.0.1",
+ "bar 1.0.0",
+ "bang 1.0.0",
+ ),
+ },
+ "removed dependency": {
+ ds: []depspec{
+ mkDepspec("root 1.0.0", "foo 1.0.0", "bar *"),
+ mkDepspec("foo 1.0.0"),
+ mkDepspec("foo 2.0.0"),
+ mkDepspec("bar 1.0.0"),
+ mkDepspec("bar 2.0.0", "baz 1.0.0"),
+ mkDepspec("baz 1.0.0", "foo 2.0.0"),
+ },
+ r: mksolution(
+ "foo 1.0.0",
+ "bar 1.0.0",
+ ),
+ maxAttempts: 2,
+ },
+ "with mismatched net addrs": {
+ ds: []depspec{
+ mkDepspec("root 1.0.0", "foo 1.0.0", "bar 1.0.0"),
+ mkDepspec("foo 1.0.0", "bar from baz 1.0.0"),
+ mkDepspec("bar 1.0.0"),
+ },
+ // TODO(sdboyer) ugh; do real error comparison instead of shitty abstraction
+ errp: []string{"foo", "foo", "root"},
+ },
+ // fixtures with locks
+ "with compatible locked dependency": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "foo *"),
+ mkDepspec("foo 1.0.0", "bar 1.0.0"),
+ mkDepspec("foo 1.0.1", "bar 1.0.1"),
+ mkDepspec("foo 1.0.2", "bar 1.0.2"),
+ mkDepspec("bar 1.0.0"),
+ mkDepspec("bar 1.0.1"),
+ mkDepspec("bar 1.0.2"),
+ },
+ l: mklock(
+ "foo 1.0.1",
+ ),
+ r: mksolution(
+ "foo 1.0.1",
+ "bar 1.0.1",
+ ),
+ },
+ "upgrade through lock": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "foo *"),
+ mkDepspec("foo 1.0.0", "bar 1.0.0"),
+ mkDepspec("foo 1.0.1", "bar 1.0.1"),
+ mkDepspec("foo 1.0.2", "bar 1.0.2"),
+ mkDepspec("bar 1.0.0"),
+ mkDepspec("bar 1.0.1"),
+ mkDepspec("bar 1.0.2"),
+ },
+ l: mklock(
+ "foo 1.0.1",
+ ),
+ r: mksolution(
+ "foo 1.0.2",
+ "bar 1.0.2",
+ ),
+ changeall: true,
+ },
+ "downgrade through lock": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "foo *"),
+ mkDepspec("foo 1.0.0", "bar 1.0.0"),
+ mkDepspec("foo 1.0.1", "bar 1.0.1"),
+ mkDepspec("foo 1.0.2", "bar 1.0.2"),
+ mkDepspec("bar 1.0.0"),
+ mkDepspec("bar 1.0.1"),
+ mkDepspec("bar 1.0.2"),
+ },
+ l: mklock(
+ "foo 1.0.1",
+ ),
+ r: mksolution(
+ "foo 1.0.0",
+ "bar 1.0.0",
+ ),
+ changeall: true,
+ downgrade: true,
+ },
+ "with incompatible locked dependency": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "foo >1.0.1"),
+ mkDepspec("foo 1.0.0", "bar 1.0.0"),
+ mkDepspec("foo 1.0.1", "bar 1.0.1"),
+ mkDepspec("foo 1.0.2", "bar 1.0.2"),
+ mkDepspec("bar 1.0.0"),
+ mkDepspec("bar 1.0.1"),
+ mkDepspec("bar 1.0.2"),
+ },
+ l: mklock(
+ "foo 1.0.1",
+ ),
+ r: mksolution(
+ "foo 1.0.2",
+ "bar 1.0.2",
+ ),
+ },
+ "with unrelated locked dependency": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "foo *"),
+ mkDepspec("foo 1.0.0", "bar 1.0.0"),
+ mkDepspec("foo 1.0.1", "bar 1.0.1"),
+ mkDepspec("foo 1.0.2", "bar 1.0.2"),
+ mkDepspec("bar 1.0.0"),
+ mkDepspec("bar 1.0.1"),
+ mkDepspec("bar 1.0.2"),
+ mkDepspec("baz 1.0.0 bazrev"),
+ },
+ l: mklock(
+ "baz 1.0.0 bazrev",
+ ),
+ r: mksolution(
+ "foo 1.0.2",
+ "bar 1.0.2",
+ ),
+ },
+ "unlocks dependencies if necessary to ensure that a new dependency is satisfied": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "foo *", "newdep *"),
+ mkDepspec("foo 1.0.0 foorev", "bar <2.0.0"),
+ mkDepspec("bar 1.0.0 barrev", "baz <2.0.0"),
+ mkDepspec("baz 1.0.0 bazrev", "qux <2.0.0"),
+ mkDepspec("qux 1.0.0 quxrev"),
+ mkDepspec("foo 2.0.0", "bar <3.0.0"),
+ mkDepspec("bar 2.0.0", "baz <3.0.0"),
+ mkDepspec("baz 2.0.0", "qux <3.0.0"),
+ mkDepspec("qux 2.0.0"),
+ mkDepspec("newdep 2.0.0", "baz >=1.5.0"),
+ },
+ l: mklock(
+ "foo 1.0.0 foorev",
+ "bar 1.0.0 barrev",
+ "baz 1.0.0 bazrev",
+ "qux 1.0.0 quxrev",
+ ),
+ r: mksolution(
+ "foo 2.0.0",
+ "bar 2.0.0",
+ "baz 2.0.0",
+ "qux 1.0.0 quxrev",
+ "newdep 2.0.0",
+ ),
+ maxAttempts: 4,
+ },
+ "locked atoms are matched on both local and net name": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "foo *"),
+ mkDepspec("foo 1.0.0 foorev"),
+ mkDepspec("foo 2.0.0 foorev2"),
+ },
+ l: mklock(
+ "foo from baz 1.0.0 foorev",
+ ),
+ r: mksolution(
+ "foo 2.0.0 foorev2",
+ ),
+ },
+ "pairs bare revs in lock with versions": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "foo ~1.0.1"),
+ mkDepspec("foo 1.0.0", "bar 1.0.0"),
+ mkDepspec("foo 1.0.1 foorev", "bar 1.0.1"),
+ mkDepspec("foo 1.0.2", "bar 1.0.2"),
+ mkDepspec("bar 1.0.0"),
+ mkDepspec("bar 1.0.1"),
+ mkDepspec("bar 1.0.2"),
+ },
+ l: mkrevlock(
+ "foo 1.0.1 foorev", // mkrevlock drops the 1.0.1
+ ),
+ r: mksolution(
+ "foo 1.0.1 foorev",
+ "bar 1.0.1",
+ ),
+ },
+ "pairs bare revs in lock with all versions": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "foo ~1.0.1"),
+ mkDepspec("foo 1.0.0", "bar 1.0.0"),
+ mkDepspec("foo 1.0.1 foorev", "bar 1.0.1"),
+ mkDepspec("foo 1.0.2 foorev", "bar 1.0.2"),
+ mkDepspec("bar 1.0.0"),
+ mkDepspec("bar 1.0.1"),
+ mkDepspec("bar 1.0.2"),
+ },
+ l: mkrevlock(
+ "foo 1.0.1 foorev", // mkrevlock drops the 1.0.1
+ ),
+ r: mksolution(
+ "foo 1.0.2 foorev",
+ "bar 1.0.1",
+ ),
+ },
+ "does not pair bare revs in manifest with unpaired lock version": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "foo ~1.0.1"),
+ mkDepspec("foo 1.0.0", "bar 1.0.0"),
+ mkDepspec("foo 1.0.1 foorev", "bar 1.0.1"),
+ mkDepspec("foo 1.0.2", "bar 1.0.2"),
+ mkDepspec("bar 1.0.0"),
+ mkDepspec("bar 1.0.1"),
+ mkDepspec("bar 1.0.2"),
+ },
+ l: mkrevlock(
+ "foo 1.0.1 foorev", // mkrevlock drops the 1.0.1
+ ),
+ r: mksolution(
+ "foo 1.0.1 foorev",
+ "bar 1.0.1",
+ ),
+ },
+ "includes root package's dev dependencies": {
+ ds: []depspec{
+ mkDepspec("root 1.0.0", "(dev) foo 1.0.0", "(dev) bar 1.0.0"),
+ mkDepspec("foo 1.0.0"),
+ mkDepspec("bar 1.0.0"),
+ },
+ r: mksolution(
+ "foo 1.0.0",
+ "bar 1.0.0",
+ ),
+ },
+ "includes dev dependency's transitive dependencies": {
+ ds: []depspec{
+ mkDepspec("root 1.0.0", "(dev) foo 1.0.0"),
+ mkDepspec("foo 1.0.0", "bar 1.0.0"),
+ mkDepspec("bar 1.0.0"),
+ },
+ r: mksolution(
+ "foo 1.0.0",
+ "bar 1.0.0",
+ ),
+ },
+ "ignores transitive dependency's dev dependencies": {
+ ds: []depspec{
+ mkDepspec("root 1.0.0", "(dev) foo 1.0.0"),
+ mkDepspec("foo 1.0.0", "(dev) bar 1.0.0"),
+ mkDepspec("bar 1.0.0"),
+ },
+ r: mksolution(
+ "foo 1.0.0",
+ ),
+ },
+ "no version that matches requirement": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "foo >=1.0.0, <2.0.0"),
+ mkDepspec("foo 2.0.0"),
+ mkDepspec("foo 2.1.3"),
+ },
+ errp: []string{"foo", "root"},
+ },
+ "no version that matches combined constraint": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "foo 1.0.0", "bar 1.0.0"),
+ mkDepspec("foo 1.0.0", "shared >=2.0.0, <3.0.0"),
+ mkDepspec("bar 1.0.0", "shared >=2.9.0, <4.0.0"),
+ mkDepspec("shared 2.5.0"),
+ mkDepspec("shared 3.5.0"),
+ },
+ errp: []string{"shared", "foo", "bar"},
+ },
+ "disjoint constraints": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "foo 1.0.0", "bar 1.0.0"),
+ mkDepspec("foo 1.0.0", "shared <=2.0.0"),
+ mkDepspec("bar 1.0.0", "shared >3.0.0"),
+ mkDepspec("shared 2.0.0"),
+ mkDepspec("shared 4.0.0"),
+ },
+ //errp: []string{"shared", "foo", "bar"}, // dart's version has this...
+ errp: []string{"foo", "bar"},
+ },
+ "no valid solution": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "a *", "b *"),
+ mkDepspec("a 1.0.0", "b 1.0.0"),
+ mkDepspec("a 2.0.0", "b 2.0.0"),
+ mkDepspec("b 1.0.0", "a 2.0.0"),
+ mkDepspec("b 2.0.0", "a 1.0.0"),
+ },
+ errp: []string{"b", "a"},
+ maxAttempts: 2,
+ },
+ "no version that matches while backtracking": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "a *", "b >1.0.0"),
+ mkDepspec("a 1.0.0"),
+ mkDepspec("b 1.0.0"),
+ },
+ errp: []string{"b", "root"},
+ },
+ // The latest versions of a and b disagree on c. An older version of either
+ // will resolve the problem. This test validates that b, which is farther
+ // from myapp in the dependency graph, is downgraded first.
+ "rolls back leaf versions first": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "a *"),
+ mkDepspec("a 1.0.0", "b *"),
+ mkDepspec("a 2.0.0", "b *", "c 2.0.0"),
+ mkDepspec("b 1.0.0"),
+ mkDepspec("b 2.0.0", "c 1.0.0"),
+ mkDepspec("c 1.0.0"),
+ mkDepspec("c 2.0.0"),
+ },
+ r: mksolution(
+ "a 2.0.0",
+ "b 1.0.0",
+ "c 2.0.0",
+ ),
+ maxAttempts: 2,
+ },
+ // Only one version of baz, so foo and bar will have to downgrade until they
+ // reach it.
+ "mutual downgrading": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "foo *"),
+ mkDepspec("foo 1.0.0", "bar 1.0.0"),
+ mkDepspec("foo 2.0.0", "bar 2.0.0"),
+ mkDepspec("foo 3.0.0", "bar 3.0.0"),
+ mkDepspec("bar 1.0.0", "baz *"),
+ mkDepspec("bar 2.0.0", "baz 2.0.0"),
+ mkDepspec("bar 3.0.0", "baz 3.0.0"),
+ mkDepspec("baz 1.0.0"),
+ },
+ r: mksolution(
+ "foo 1.0.0",
+ "bar 1.0.0",
+ "baz 1.0.0",
+ ),
+ maxAttempts: 3,
+ },
+ // Ensures the solver doesn't exhaustively search all versions of b when
+ // it's a 2.0.0's dependency on the nonexistent c 2.0.0 that causes the
+ // problem. We make sure b has more versions than a so that the solver
+ // tries a first, since it sorts sibling dependencies by number of
+ // versions.
+ "search real failer": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "a *", "b *"),
+ mkDepspec("a 1.0.0", "c 1.0.0"),
+ mkDepspec("a 2.0.0", "c 2.0.0"),
+ mkDepspec("b 1.0.0"),
+ mkDepspec("b 2.0.0"),
+ mkDepspec("b 3.0.0"),
+ mkDepspec("c 1.0.0"),
+ },
+ r: mksolution(
+ "a 1.0.0",
+ "b 3.0.0",
+ "c 1.0.0",
+ ),
+ maxAttempts: 2,
+ },
+ // Dependencies are ordered so that packages with fewer versions are tried
+ // first. Here, there are two valid solutions (either a or b must be
+ // downgraded once). The chosen one depends on which dep is traversed first.
+ // Since b has fewer versions, it will be traversed first, which means a
+ // will come later. Since later selections are revised first, a gets
+ // downgraded.
+ "traverse into package with fewer versions first": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "a *", "b *"),
+ mkDepspec("a 1.0.0", "c *"),
+ mkDepspec("a 2.0.0", "c *"),
+ mkDepspec("a 3.0.0", "c *"),
+ mkDepspec("a 4.0.0", "c *"),
+ mkDepspec("a 5.0.0", "c 1.0.0"),
+ mkDepspec("b 1.0.0", "c *"),
+ mkDepspec("b 2.0.0", "c *"),
+ mkDepspec("b 3.0.0", "c *"),
+ mkDepspec("b 4.0.0", "c 2.0.0"),
+ mkDepspec("c 1.0.0"),
+ mkDepspec("c 2.0.0"),
+ },
+ r: mksolution(
+ "a 4.0.0",
+ "b 4.0.0",
+ "c 2.0.0",
+ ),
+ maxAttempts: 2,
+ },
+ // This is similar to the preceding fixture. When getting the number of
+ // versions of a package to determine which to traverse first, versions that
+ // are disallowed by the root package's constraints should not be
+ // considered. Here, foo has more versions than bar in total (4), but fewer
+ // that meet myapp's constraints (only 2). There is no solution, but we will
+ // do less backtracking if foo is tested first.
+ "root constraints pre-eliminate versions": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "foo *", "bar *"),
+ mkDepspec("foo 1.0.0", "none 2.0.0"),
+ mkDepspec("foo 2.0.0", "none 2.0.0"),
+ mkDepspec("foo 3.0.0", "none 2.0.0"),
+ mkDepspec("foo 4.0.0", "none 2.0.0"),
+ mkDepspec("bar 1.0.0"),
+ mkDepspec("bar 2.0.0"),
+ mkDepspec("bar 3.0.0"),
+ mkDepspec("none 1.0.0"),
+ },
+ errp: []string{"none", "foo"},
+ maxAttempts: 1,
+ },
+ // If there's a disjoint constraint on a package, then selecting other
+ // versions of it is a waste of time: no possible versions can match. We
+ // need to jump past it to the most recent package that affected the
+ // constraint.
+ "backjump past failed package on disjoint constraint": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "a *", "foo *"),
+ mkDepspec("a 1.0.0", "foo *"),
+ mkDepspec("a 2.0.0", "foo <1.0.0"),
+ mkDepspec("foo 2.0.0"),
+ mkDepspec("foo 2.0.1"),
+ mkDepspec("foo 2.0.2"),
+ mkDepspec("foo 2.0.3"),
+ mkDepspec("foo 2.0.4"),
+ mkDepspec("none 1.0.0"),
+ },
+ r: mksolution(
+ "a 1.0.0",
+ "foo 2.0.4",
+ ),
+ maxAttempts: 2,
+ },
+ // Revision enters vqueue if a dep has a constraint on that revision
+ "revision injected into vqueue": {
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "foo r123abc"),
+ mkDepspec("foo r123abc"),
+ mkDepspec("foo 1.0.0 foorev"),
+ mkDepspec("foo 2.0.0 foorev2"),
+ },
+ r: mksolution(
+ "foo r123abc",
+ ),
+ },
+ // TODO(sdboyer) decide how to refactor the solver in order to re-enable these.
+ // Checking for revision existence is important...but kinda obnoxious.
+ //{
+ //// Solve fails if revision constraint calls for a nonexistent revision
+ //n: "fail on missing revision",
+ //ds: []depspec{
+ //mkDepspec("root 0.0.0", "bar *"),
+ //mkDepspec("bar 1.0.0", "foo r123abc"),
+ //mkDepspec("foo r123nomatch"),
+ //mkDepspec("foo 1.0.0"),
+ //mkDepspec("foo 2.0.0"),
+ //},
+ //errp: []string{"bar", "foo", "bar"},
+ //},
+ //{
+ //// Solve fails if revision constraint calls for a nonexistent revision,
+ //// even if rev constraint is specified by root
+ //n: "fail on missing revision from root",
+ //ds: []depspec{
+ //mkDepspec("root 0.0.0", "foo r123nomatch"),
+ //mkDepspec("foo r123abc"),
+ //mkDepspec("foo 1.0.0"),
+ //mkDepspec("foo 2.0.0"),
+ //},
+ //errp: []string{"foo", "root", "foo"},
+ //},
+
+ // TODO(sdboyer) add fixture that tests proper handling of loops via aliases (where
+ // a project that wouldn't be a loop is aliased to a project that is a loop)
+}
+
+func init() {
+ // This sets up a hundred versions of foo and bar, 0.0.0 through 9.9.0. Each
+ // version of foo depends on a baz with the same major version. Each version
+ // of bar depends on a baz with the same minor version. There is only one
+ // version of baz, 0.0.0, so only older versions of foo and bar will
+ // satisfy it.
+ fix := basicFixture{
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "foo *", "bar *"),
+ mkDepspec("baz 0.0.0"),
+ },
+ r: mksolution(
+ "foo 0.9.0",
+ "bar 9.0.0",
+ "baz 0.0.0",
+ ),
+ maxAttempts: 10,
+ }
+
+ for i := 0; i < 10; i++ {
+ for j := 0; j < 10; j++ {
+ fix.ds = append(fix.ds, mkDepspec(fmt.Sprintf("foo %v.%v.0", i, j), fmt.Sprintf("baz %v.0.0", i)))
+ fix.ds = append(fix.ds, mkDepspec(fmt.Sprintf("bar %v.%v.0", i, j), fmt.Sprintf("baz 0.%v.0", j)))
+ }
+ }
+
+ basicFixtures["complex backtrack"] = fix
+
+ for k, fix := range basicFixtures {
+ // Assign the name into the fixture itself
+ fix.n = k
+ basicFixtures[k] = fix
+ }
+}
+
+// reachMaps contain externalReach()-type data for a given depspec fixture's
+// universe of projects, packages, and versions.
+type reachMap map[pident]map[string][]string
+
+type depspecSourceManager struct {
+ specs []depspec
+ rm reachMap
+ ig map[string]bool
+}
+
+type fixSM interface {
+ SourceManager
+ rootSpec() depspec
+ allSpecs() []depspec
+ ignore() map[string]bool
+}
+
+var _ fixSM = &depspecSourceManager{}
+
+func newdepspecSM(ds []depspec, ignore []string) *depspecSourceManager {
+ ig := make(map[string]bool)
+ if len(ignore) > 0 {
+ for _, pkg := range ignore {
+ ig[pkg] = true
+ }
+ }
+
+ return &depspecSourceManager{
+ specs: ds,
+ rm: computeBasicReachMap(ds),
+ ig: ig,
+ }
+}
+
+func (sm *depspecSourceManager) GetProjectInfo(n ProjectRoot, v Version) (Manifest, Lock, error) {
+ for _, ds := range sm.specs {
+ if n == ds.n && v.Matches(ds.v) {
+ return ds, dummyLock{}, nil
+ }
+ }
+
+ // TODO(sdboyer) proper solver-type errors
+ return nil, nil, fmt.Errorf("Project %s at version %s could not be found", n, v)
+}
+
+func (sm *depspecSourceManager) ExternalReach(n ProjectRoot, v Version) (map[string][]string, error) {
+ id := pident{n: n, v: v}
+ if m, exists := sm.rm[id]; exists {
+ return m, nil
+ }
+ return nil, fmt.Errorf("No reach data for %s at version %s", n, v)
+}
+
+func (sm *depspecSourceManager) ListExternal(n ProjectRoot, v Version) ([]string, error) {
+ // This should only be called for the root
+ id := pident{n: n, v: v}
+ if r, exists := sm.rm[id]; exists {
+ return r[string(n)], nil
+ }
+ return nil, fmt.Errorf("No reach data for %s at version %s", n, v)
+}
+
+func (sm *depspecSourceManager) ListPackages(n ProjectRoot, v Version) (PackageTree, error) {
+ id := pident{n: n, v: v}
+ if r, exists := sm.rm[id]; exists {
+ ptree := PackageTree{
+ ImportRoot: string(n),
+ Packages: map[string]PackageOrErr{
+ string(n): {
+ P: Package{
+ ImportPath: string(n),
+ Name: string(n),
+ Imports: r[string(n)],
+ },
+ },
+ },
+ }
+ return ptree, nil
+ }
+
+ return PackageTree{}, fmt.Errorf("Project %s at version %s could not be found", n, v)
+}
+
+func (sm *depspecSourceManager) ListVersions(name ProjectRoot) (pi []Version, err error) {
+ for _, ds := range sm.specs {
+ // To simulate the behavior of the real SourceManager, we do not return
+ // revisions from ListVersions().
+ if _, isrev := ds.v.(Revision); !isrev && name == ds.n {
+ pi = append(pi, ds.v)
+ }
+ }
+
+ if len(pi) == 0 {
+ err = fmt.Errorf("Project %s could not be found", name)
+ }
+
+ return
+}
+
+func (sm *depspecSourceManager) RevisionPresentIn(name ProjectRoot, r Revision) (bool, error) {
+ for _, ds := range sm.specs {
+ if name == ds.n && r == ds.v {
+ return true, nil
+ }
+ }
+
+ return false, fmt.Errorf("Project %s has no revision %s", name, r)
+}
+
+func (sm *depspecSourceManager) RepoExists(name ProjectRoot) (bool, error) {
+ for _, ds := range sm.specs {
+ if name == ds.n {
+ return true, nil
+ }
+ }
+
+ return false, nil
+}
+
+func (sm *depspecSourceManager) VendorCodeExists(name ProjectRoot) (bool, error) {
+ return false, nil
+}
+
+func (sm *depspecSourceManager) Release() {}
+
+func (sm *depspecSourceManager) ExportProject(n ProjectRoot, v Version, to string) error {
+ return fmt.Errorf("dummy sm doesn't support exporting")
+}
+
+func (sm *depspecSourceManager) rootSpec() depspec {
+ return sm.specs[0]
+}
+
+func (sm *depspecSourceManager) allSpecs() []depspec {
+ return sm.specs
+}
+
+func (sm *depspecSourceManager) ignore() map[string]bool {
+ return sm.ig
+}
+
+type depspecBridge struct {
+ *bridge
+}
+
+// override computeRootReach() on bridge to read directly out of the depspecs
+func (b *depspecBridge) computeRootReach() ([]string, error) {
+ // This only gets called for the root project, so grab that one off the test
+ // source manager
+ dsm := b.sm.(fixSM)
+ root := dsm.rootSpec()
+
+ ptree, err := dsm.ListPackages(root.n, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ return ptree.ListExternalImports(true, true, dsm.ignore()), nil
+}
+
+// override verifyRootDir() on bridge to prevent any filesystem interaction
+func (b *depspecBridge) verifyRootDir(path string) error {
+ root := b.sm.(fixSM).rootSpec()
+ if string(root.n) != path {
+ return fmt.Errorf("Expected only root project %q to computeRootReach(), got %q", root.n, path)
+ }
+
+ return nil
+}
+
+func (b *depspecBridge) listPackages(id ProjectIdentifier, v Version) (PackageTree, error) {
+ return b.sm.(fixSM).ListPackages(b.key(id), v)
+}
+
+// override deduceRemoteRepo on bridge to make all our pkg/project mappings work
+// as expected
+func (b *depspecBridge) deduceRemoteRepo(path string) (*remoteRepo, error) {
+ for _, ds := range b.sm.(fixSM).allSpecs() {
+ n := string(ds.n)
+ if path == n || strings.HasPrefix(path, n+"/") {
+ return &remoteRepo{
+ Base: n,
+ RelPkg: strings.TrimPrefix(path, n+"/"),
+ }, nil
+ }
+ }
+ return nil, fmt.Errorf("Could not find %s, or any parent, in list of known fixtures", path)
+}
+
+// enforce interfaces
+var _ Manifest = depspec{}
+var _ Lock = dummyLock{}
+var _ Lock = fixLock{}
+
+// impl Spec interface
+func (ds depspec) DependencyConstraints() []ProjectConstraint {
+ return ds.deps
+}
+
+// impl Spec interface
+func (ds depspec) TestDependencyConstraints() []ProjectConstraint {
+ return ds.devdeps
+}
+
+type fixLock []LockedProject
+
+func (fixLock) SolverVersion() string {
+ return "-1"
+}
+
+// impl Lock interface
+func (fixLock) InputHash() []byte {
+ return []byte("fooooorooooofooorooofoo")
+}
+
+// impl Lock interface
+func (l fixLock) Projects() []LockedProject {
+ return l
+}
+
+type dummyLock struct{}
+
+// impl Lock interface
+func (dummyLock) SolverVersion() string {
+ return "-1"
+}
+
+// impl Lock interface
+func (dummyLock) InputHash() []byte {
+ return []byte("fooooorooooofooorooofoo")
+}
+
+// impl Lock interface
+func (dummyLock) Projects() []LockedProject {
+ return nil
+}
+
+// We've borrowed this bestiary from pub's tests:
+// https://github.com/dart-lang/pub/blob/master/test/version_solver_test.dart
+
+// TODO(sdboyer) finish converting all of these
+
+/*
+func basicGraph() {
+ testResolve("circular dependency", {
+ "myapp 1.0.0": {
+ "foo": "1.0.0"
+ },
+ "foo 1.0.0": {
+ "bar": "1.0.0"
+ },
+ "bar 1.0.0": {
+ "foo": "1.0.0"
+ }
+ }, result: {
+ "myapp from root": "1.0.0",
+ "foo": "1.0.0",
+ "bar": "1.0.0"
+ });
+
+}
+
+func withLockFile() {
+
+}
+
+func rootDependency() {
+ testResolve("with root source", {
+ "myapp 1.0.0": {
+ "foo": "1.0.0"
+ },
+ "foo 1.0.0": {
+ "myapp from root": ">=1.0.0"
+ }
+ }, result: {
+ "myapp from root": "1.0.0",
+ "foo": "1.0.0"
+ });
+
+ testResolve("with different source", {
+ "myapp 1.0.0": {
+ "foo": "1.0.0"
+ },
+ "foo 1.0.0": {
+ "myapp": ">=1.0.0"
+ }
+ }, result: {
+ "myapp from root": "1.0.0",
+ "foo": "1.0.0"
+ });
+
+ testResolve("with wrong version", {
+ "myapp 1.0.0": {
+ "foo": "1.0.0"
+ },
+ "foo 1.0.0": {
+ "myapp": "<1.0.0"
+ }
+ }, error: couldNotSolve);
+}
+
+func unsolvable() {
+
+ testResolve("mismatched descriptions", {
+ "myapp 0.0.0": {
+ "foo": "1.0.0",
+ "bar": "1.0.0"
+ },
+ "foo 1.0.0": {
+ "shared-x": "1.0.0"
+ },
+ "bar 1.0.0": {
+ "shared-y": "1.0.0"
+ },
+ "shared-x 1.0.0": {},
+ "shared-y 1.0.0": {}
+ }, error: descriptionMismatch("shared", "foo", "bar"));
+
+ testResolve("mismatched sources", {
+ "myapp 0.0.0": {
+ "foo": "1.0.0",
+ "bar": "1.0.0"
+ },
+ "foo 1.0.0": {
+ "shared": "1.0.0"
+ },
+ "bar 1.0.0": {
+ "shared from mock2": "1.0.0"
+ },
+ "shared 1.0.0": {},
+ "shared 1.0.0 from mock2": {}
+ }, error: sourceMismatch("shared", "foo", "bar"));
+
+
+
+ // This is a regression test for #18300.
+ testResolve("...", {
+ "myapp 0.0.0": {
+ "angular": "any",
+ "collection": "any"
+ },
+ "analyzer 0.12.2": {},
+ "angular 0.10.0": {
+ "di": ">=0.0.32 <0.1.0",
+ "collection": ">=0.9.1 <1.0.0"
+ },
+ "angular 0.9.11": {
+ "di": ">=0.0.32 <0.1.0",
+ "collection": ">=0.9.1 <1.0.0"
+ },
+ "angular 0.9.10": {
+ "di": ">=0.0.32 <0.1.0",
+ "collection": ">=0.9.1 <1.0.0"
+ },
+ "collection 0.9.0": {},
+ "collection 0.9.1": {},
+ "di 0.0.37": {"analyzer": ">=0.13.0 <0.14.0"},
+ "di 0.0.36": {"analyzer": ">=0.13.0 <0.14.0"}
+ }, error: noVersion(["analyzer", "di"]), maxTries: 2);
+}
+
+func badSource() {
+ testResolve("fail if the root package has a bad source in dep", {
+ "myapp 0.0.0": {
+ "foo from bad": "any"
+ },
+ }, error: unknownSource("myapp", "foo", "bad"));
+
+ testResolve("fail if the root package has a bad source in dev dep", {
+ "myapp 0.0.0": {
+ "(dev) foo from bad": "any"
+ },
+ }, error: unknownSource("myapp", "foo", "bad"));
+
+ testResolve("fail if all versions have bad source in dep", {
+ "myapp 0.0.0": {
+ "foo": "any"
+ },
+ "foo 1.0.0": {
+ "bar from bad": "any"
+ },
+ "foo 1.0.1": {
+ "baz from bad": "any"
+ },
+ "foo 1.0.3": {
+ "bang from bad": "any"
+ },
+ }, error: unknownSource("foo", "bar", "bad"), maxTries: 3);
+
+ testResolve("ignore versions with bad source in dep", {
+ "myapp 1.0.0": {
+ "foo": "any"
+ },
+ "foo 1.0.0": {
+ "bar": "any"
+ },
+ "foo 1.0.1": {
+ "bar from bad": "any"
+ },
+ "foo 1.0.3": {
+ "bar from bad": "any"
+ },
+ "bar 1.0.0": {}
+ }, result: {
+ "myapp from root": "1.0.0",
+ "foo": "1.0.0",
+ "bar": "1.0.0"
+ }, maxTries: 3);
+}
+
+func backtracking() {
+ testResolve("circular dependency on older version", {
+ "myapp 0.0.0": {
+ "a": ">=1.0.0"
+ },
+ "a 1.0.0": {},
+ "a 2.0.0": {
+ "b": "1.0.0"
+ },
+ "b 1.0.0": {
+ "a": "1.0.0"
+ }
+ }, result: {
+ "myapp from root": "0.0.0",
+ "a": "1.0.0"
+ }, maxTries: 2);
+}
+*/
diff --git a/vendor/github.com/sdboyer/vsolver/solve_bimodal_test.go b/vendor/github.com/sdboyer/gps/solve_bimodal_test.go
similarity index 65%
rename from vendor/github.com/sdboyer/vsolver/solve_bimodal_test.go
rename to vendor/github.com/sdboyer/gps/solve_bimodal_test.go
index 21df3cb..09333e0 100644
--- a/vendor/github.com/sdboyer/vsolver/solve_bimodal_test.go
+++ b/vendor/github.com/sdboyer/gps/solve_bimodal_test.go
@@ -1,4 +1,4 @@
-package vsolver
+package gps
import (
"fmt"
@@ -37,12 +37,12 @@
// including a single, simple import that is not expressed as a constraint
"simple bm-add": {
ds: []depspec{
- dsp(dsv("root 0.0.0"),
+ dsp(mkDepspec("root 0.0.0"),
pkg("root", "a")),
- dsp(dsv("a 1.0.0"),
+ dsp(mkDepspec("a 1.0.0"),
pkg("a")),
},
- r: mkresults(
+ r: mksolution(
"a 1.0.0",
),
},
@@ -50,77 +50,77 @@
// same path as root, but from a subpkg
"subpkg bm-add": {
ds: []depspec{
- dsp(dsv("root 0.0.0"),
+ dsp(mkDepspec("root 0.0.0"),
pkg("root", "root/foo"),
pkg("root/foo", "a"),
),
- dsp(dsv("a 1.0.0"),
+ dsp(mkDepspec("a 1.0.0"),
pkg("a"),
),
},
- r: mkresults(
+ r: mksolution(
"a 1.0.0",
),
},
// The same, but with a jump through two subpkgs
"double-subpkg bm-add": {
ds: []depspec{
- dsp(dsv("root 0.0.0"),
+ dsp(mkDepspec("root 0.0.0"),
pkg("root", "root/foo"),
pkg("root/foo", "root/bar"),
pkg("root/bar", "a"),
),
- dsp(dsv("a 1.0.0"),
+ dsp(mkDepspec("a 1.0.0"),
pkg("a"),
),
},
- r: mkresults(
+ r: mksolution(
"a 1.0.0",
),
},
// Same again, but now nest the subpkgs
"double nested subpkg bm-add": {
ds: []depspec{
- dsp(dsv("root 0.0.0"),
+ dsp(mkDepspec("root 0.0.0"),
pkg("root", "root/foo"),
pkg("root/foo", "root/foo/bar"),
pkg("root/foo/bar", "a"),
),
- dsp(dsv("a 1.0.0"),
+ dsp(mkDepspec("a 1.0.0"),
pkg("a"),
),
},
- r: mkresults(
+ r: mksolution(
"a 1.0.0",
),
},
// Importing package from project with no root package
"bm-add on project with no pkg in root dir": {
ds: []depspec{
- dsp(dsv("root 0.0.0"),
+ dsp(mkDepspec("root 0.0.0"),
pkg("root", "a/foo")),
- dsp(dsv("a 1.0.0"),
+ dsp(mkDepspec("a 1.0.0"),
pkg("a/foo")),
},
- r: mkresults(
+ r: mksolution(
"a 1.0.0",
),
},
// Import jump is in a dep, and points to a transitive dep
"transitive bm-add": {
ds: []depspec{
- dsp(dsv("root 0.0.0"),
+ dsp(mkDepspec("root 0.0.0"),
pkg("root", "root/foo"),
pkg("root/foo", "a"),
),
- dsp(dsv("a 1.0.0"),
+ dsp(mkDepspec("a 1.0.0"),
pkg("a", "b"),
),
- dsp(dsv("b 1.0.0"),
+ dsp(mkDepspec("b 1.0.0"),
pkg("b"),
),
},
- r: mkresults(
+ r: mksolution(
"a 1.0.0",
"b 1.0.0",
),
@@ -129,21 +129,21 @@
// reachable import
"constraints activated by import": {
ds: []depspec{
- dsp(dsv("root 0.0.0", "b 1.0.0"),
+ dsp(mkDepspec("root 0.0.0", "b 1.0.0"),
pkg("root", "root/foo"),
pkg("root/foo", "a"),
),
- dsp(dsv("a 1.0.0"),
+ dsp(mkDepspec("a 1.0.0"),
pkg("a", "b"),
),
- dsp(dsv("b 1.0.0"),
+ dsp(mkDepspec("b 1.0.0"),
pkg("b"),
),
- dsp(dsv("b 1.1.0"),
+ dsp(mkDepspec("b 1.1.0"),
pkg("b"),
),
},
- r: mkresults(
+ r: mksolution(
"a 1.0.0",
"b 1.1.0",
),
@@ -152,21 +152,21 @@
// the first version we try
"transitive bm-add on older version": {
ds: []depspec{
- dsp(dsv("root 0.0.0", "a ~1.0.0"),
+ dsp(mkDepspec("root 0.0.0", "a ~1.0.0"),
pkg("root", "root/foo"),
pkg("root/foo", "a"),
),
- dsp(dsv("a 1.0.0"),
+ dsp(mkDepspec("a 1.0.0"),
pkg("a", "b"),
),
- dsp(dsv("a 1.1.0"),
+ dsp(mkDepspec("a 1.1.0"),
pkg("a"),
),
- dsp(dsv("b 1.0.0"),
+ dsp(mkDepspec("b 1.0.0"),
pkg("b"),
),
},
- r: mkresults(
+ r: mksolution(
"a 1.0.0",
"b 1.0.0",
),
@@ -175,28 +175,28 @@
// get there via backtracking
"backtrack to dep on bm-add": {
ds: []depspec{
- dsp(dsv("root 0.0.0"),
+ dsp(mkDepspec("root 0.0.0"),
pkg("root", "root/foo"),
pkg("root/foo", "a", "b"),
),
- dsp(dsv("a 1.0.0"),
+ dsp(mkDepspec("a 1.0.0"),
pkg("a", "c"),
),
- dsp(dsv("a 1.1.0"),
+ dsp(mkDepspec("a 1.1.0"),
pkg("a"),
),
// Include two versions of b, otherwise it'll be selected first
- dsp(dsv("b 0.9.0"),
+ dsp(mkDepspec("b 0.9.0"),
pkg("b", "c"),
),
- dsp(dsv("b 1.0.0"),
+ dsp(mkDepspec("b 1.0.0"),
pkg("b", "c"),
),
- dsp(dsv("c 1.0.0", "a 1.0.0"),
+ dsp(mkDepspec("c 1.0.0", "a 1.0.0"),
pkg("c", "a"),
),
},
- r: mkresults(
+ r: mksolution(
"a 1.0.0",
"b 1.0.0",
"c 1.0.0",
@@ -205,19 +205,19 @@
// Import jump is in a dep subpkg, and points to a transitive dep
"transitive subpkg bm-add": {
ds: []depspec{
- dsp(dsv("root 0.0.0"),
+ dsp(mkDepspec("root 0.0.0"),
pkg("root", "root/foo"),
pkg("root/foo", "a"),
),
- dsp(dsv("a 1.0.0"),
+ dsp(mkDepspec("a 1.0.0"),
pkg("a", "a/bar"),
pkg("a/bar", "b"),
),
- dsp(dsv("b 1.0.0"),
+ dsp(mkDepspec("b 1.0.0"),
pkg("b"),
),
},
- r: mkresults(
+ r: mksolution(
"a 1.0.0",
"b 1.0.0",
),
@@ -226,23 +226,23 @@
// not the first version we try
"transitive subpkg bm-add on older version": {
ds: []depspec{
- dsp(dsv("root 0.0.0", "a ~1.0.0"),
+ dsp(mkDepspec("root 0.0.0", "a ~1.0.0"),
pkg("root", "root/foo"),
pkg("root/foo", "a"),
),
- dsp(dsv("a 1.0.0"),
+ dsp(mkDepspec("a 1.0.0"),
pkg("a", "a/bar"),
pkg("a/bar", "b"),
),
- dsp(dsv("a 1.1.0"),
+ dsp(mkDepspec("a 1.1.0"),
pkg("a", "a/bar"),
pkg("a/bar"),
),
- dsp(dsv("b 1.0.0"),
+ dsp(mkDepspec("b 1.0.0"),
pkg("b"),
),
},
- r: mkresults(
+ r: mksolution(
"a 1.0.0",
"b 1.0.0",
),
@@ -252,38 +252,38 @@
// is not part of the solution.
"ignore constraint without import": {
ds: []depspec{
- dsp(dsv("root 0.0.0", "a 1.0.0"),
+ dsp(mkDepspec("root 0.0.0", "a 1.0.0"),
pkg("root", "root/foo"),
pkg("root/foo"),
),
- dsp(dsv("a 1.0.0"),
+ dsp(mkDepspec("a 1.0.0"),
pkg("a"),
),
},
- r: mkresults(),
+ r: mksolution(),
},
// Transitive deps from one project (a) get incrementally included as other
// deps incorporate its various packages.
"multi-stage pkg incorporation": {
ds: []depspec{
- dsp(dsv("root 0.0.0"),
+ dsp(mkDepspec("root 0.0.0"),
pkg("root", "a", "d"),
),
- dsp(dsv("a 1.0.0"),
+ dsp(mkDepspec("a 1.0.0"),
pkg("a", "b"),
pkg("a/second", "c"),
),
- dsp(dsv("b 2.0.0"),
+ dsp(mkDepspec("b 2.0.0"),
pkg("b"),
),
- dsp(dsv("c 1.2.0"),
+ dsp(mkDepspec("c 1.2.0"),
pkg("c"),
),
- dsp(dsv("d 1.0.0"),
+ dsp(mkDepspec("d 1.0.0"),
pkg("d", "a/second"),
),
},
- r: mkresults(
+ r: mksolution(
"a 1.0.0",
"b 2.0.0",
"c 1.2.0",
@@ -295,17 +295,17 @@
// present.
"radix path separator post-check": {
ds: []depspec{
- dsp(dsv("root 0.0.0"),
+ dsp(mkDepspec("root 0.0.0"),
pkg("root", "foo", "foobar"),
),
- dsp(dsv("foo 1.0.0"),
+ dsp(mkDepspec("foo 1.0.0"),
pkg("foo"),
),
- dsp(dsv("foobar 1.0.0"),
+ dsp(mkDepspec("foobar 1.0.0"),
pkg("foobar"),
),
},
- r: mkresults(
+ r: mksolution(
"foo 1.0.0",
"foobar 1.0.0",
),
@@ -313,10 +313,10 @@
// Well-formed failure when there's a dependency on a pkg that doesn't exist
"fail when imports nonexistent package": {
ds: []depspec{
- dsp(dsv("root 0.0.0", "a 1.0.0"),
+ dsp(mkDepspec("root 0.0.0", "a 1.0.0"),
pkg("root", "a/foo"),
),
- dsp(dsv("a 1.0.0"),
+ dsp(mkDepspec("a 1.0.0"),
pkg("a"),
),
},
@@ -327,20 +327,20 @@
// discover one incrementally that isn't present
"fail multi-stage missing pkg": {
ds: []depspec{
- dsp(dsv("root 0.0.0"),
+ dsp(mkDepspec("root 0.0.0"),
pkg("root", "a", "d"),
),
- dsp(dsv("a 1.0.0"),
+ dsp(mkDepspec("a 1.0.0"),
pkg("a", "b"),
pkg("a/second", "c"),
),
- dsp(dsv("b 2.0.0"),
+ dsp(mkDepspec("b 2.0.0"),
pkg("b"),
),
- dsp(dsv("c 1.2.0"),
+ dsp(mkDepspec("c 1.2.0"),
pkg("c"),
),
- dsp(dsv("d 1.0.0"),
+ dsp(mkDepspec("d 1.0.0"),
pkg("d", "a/second"),
pkg("d", "a/nonexistent"),
),
@@ -350,43 +350,122 @@
// Check ignores on the root project
"ignore in double-subpkg": {
ds: []depspec{
- dsp(dsv("root 0.0.0"),
+ dsp(mkDepspec("root 0.0.0"),
pkg("root", "root/foo"),
pkg("root/foo", "root/bar", "b"),
pkg("root/bar", "a"),
),
- dsp(dsv("a 1.0.0"),
+ dsp(mkDepspec("a 1.0.0"),
pkg("a"),
),
- dsp(dsv("b 1.0.0"),
+ dsp(mkDepspec("b 1.0.0"),
pkg("b"),
),
},
ignore: []string{"root/bar"},
- r: mkresults(
+ r: mksolution(
"b 1.0.0",
),
},
// Ignores on a dep pkg
"ignore through dep pkg": {
ds: []depspec{
- dsp(dsv("root 0.0.0"),
+ dsp(mkDepspec("root 0.0.0"),
pkg("root", "root/foo"),
pkg("root/foo", "a"),
),
- dsp(dsv("a 1.0.0"),
+ dsp(mkDepspec("a 1.0.0"),
pkg("a", "a/bar"),
pkg("a/bar", "b"),
),
- dsp(dsv("b 1.0.0"),
+ dsp(mkDepspec("b 1.0.0"),
pkg("b"),
),
},
ignore: []string{"a/bar"},
- r: mkresults(
+ r: mksolution(
"a 1.0.0",
),
},
+ // Preferred version, as derived from a dep's lock, is attempted first
+ "respect prefv, simple case": {
+ ds: []depspec{
+ dsp(mkDepspec("root 0.0.0"),
+ pkg("root", "a")),
+ dsp(mkDepspec("a 1.0.0"),
+ pkg("a", "b")),
+ dsp(mkDepspec("b 1.0.0 foorev"),
+ pkg("b")),
+ dsp(mkDepspec("b 2.0.0 barrev"),
+ pkg("b")),
+ },
+ lm: map[string]fixLock{
+ "a 1.0.0": mklock(
+ "b 1.0.0 foorev",
+ ),
+ },
+ r: mksolution(
+ "a 1.0.0",
+ "b 1.0.0 foorev",
+ ),
+ },
+ // Preferred version, as derived from a dep's lock, is attempted first, even
+ // if the root also has a direct dep on it (root doesn't need to use
+ // preferreds, because it has direct control AND because the root lock
+ // already supersedes dep lock "preferences")
+ "respect dep prefv with root import": {
+ ds: []depspec{
+ dsp(mkDepspec("root 0.0.0"),
+ pkg("root", "a", "b")),
+ dsp(mkDepspec("a 1.0.0"),
+ pkg("a", "b")),
+ //dsp(newDepspec("a 1.0.1"),
+ //pkg("a", "b")),
+ //dsp(newDepspec("a 1.1.0"),
+ //pkg("a", "b")),
+ dsp(mkDepspec("b 1.0.0 foorev"),
+ pkg("b")),
+ dsp(mkDepspec("b 2.0.0 barrev"),
+ pkg("b")),
+ },
+ lm: map[string]fixLock{
+ "a 1.0.0": mklock(
+ "b 1.0.0 foorev",
+ ),
+ },
+ r: mksolution(
+ "a 1.0.0",
+ "b 1.0.0 foorev",
+ ),
+ },
+ // Preferred versions can only work if the thing offering it has been
+ // selected, or at least marked in the unselected queue
+ "prefv only works if depper is selected": {
+ ds: []depspec{
+ dsp(mkDepspec("root 0.0.0"),
+ pkg("root", "a", "b")),
+ // Three atoms for a, which will mean it gets visited after b
+ dsp(mkDepspec("a 1.0.0"),
+ pkg("a", "b")),
+ dsp(mkDepspec("a 1.0.1"),
+ pkg("a", "b")),
+ dsp(mkDepspec("a 1.1.0"),
+ pkg("a", "b")),
+ dsp(mkDepspec("b 1.0.0 foorev"),
+ pkg("b")),
+ dsp(mkDepspec("b 2.0.0 barrev"),
+ pkg("b")),
+ },
+ lm: map[string]fixLock{
+ "a 1.0.0": mklock(
+ "b 1.0.0 foorev",
+ ),
+ },
+ r: mksolution(
+ "a 1.1.0",
+ "b 2.0.0 barrev",
+ ),
+ },
}
// tpkg is a representation of a single package. It has its own import path, as
@@ -411,6 +490,9 @@
downgrade bool
// lock file simulator, if one's to be used at all
l fixLock
+ // map of locks for deps, if any. keys should be of the form:
+ // "<project> <version>"
+ lm map[string]fixLock
// projects expected to have errors, if any
errp []string
// request up/downgrade to all projects
@@ -435,29 +517,31 @@
return f.errp
}
-func (f bimodalFixture) result() map[string]Version {
+func (f bimodalFixture) solution() map[string]Version {
return f.r
}
// bmSourceManager is an SM specifically for the bimodal fixtures. It composes
-// the general depspec SM, and differs from it only in the way that it answers
-// some static analysis-type calls.
+// the general depspec SM, and differs from it in how it answers static analysis
+// calls, and its support for package ignores and dep lock data.
type bmSourceManager struct {
depspecSourceManager
+ lm map[string]fixLock
}
var _ SourceManager = &bmSourceManager{}
-func newbmSM(ds []depspec, ignore []string) *bmSourceManager {
+func newbmSM(bmf bimodalFixture) *bmSourceManager {
sm := &bmSourceManager{
- depspecSourceManager: *newdepspecSM(ds, ignore),
+ depspecSourceManager: *newdepspecSM(bmf.ds, bmf.ignore),
}
- sm.rm = computeBimodalExternalMap(ds)
+ sm.rm = computeBimodalExternalMap(bmf.ds)
+ sm.lm = bmf.lm
return sm
}
-func (sm *bmSourceManager) ListPackages(n ProjectName, v Version) (PackageTree, error) {
+func (sm *bmSourceManager) ListPackages(n ProjectRoot, v Version) (PackageTree, error) {
for k, ds := range sm.specs {
// Cheat for root, otherwise we blow up b/c version is empty
if n == ds.n && (k == 0 || ds.v.Matches(v)) {
@@ -482,6 +566,20 @@
return PackageTree{}, fmt.Errorf("Project %s at version %s could not be found", n, v)
}
+func (sm *bmSourceManager) GetProjectInfo(n ProjectRoot, v Version) (Manifest, Lock, error) {
+ for _, ds := range sm.specs {
+ if n == ds.n && v.Matches(ds.v) {
+ if l, exists := sm.lm[string(n)+" "+v.String()]; exists {
+ return ds, l, nil
+ }
+ return ds, dummyLock{}, nil
+ }
+ }
+
+ // TODO(sdboyer) proper solver-type errors
+ return nil, nil, fmt.Errorf("Project %s at version %s could not be found", n, v)
+}
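The `GetProjectInfo` method above prefers a per-version lock entry from the fixture's `lm` map (keyed `"<project> <version>"`) and falls back to a dummy lock otherwise. A minimal standalone sketch of that lookup, with illustrative stand-in types rather than the actual gps ones:

```go
package main

import "fmt"

// fixLock stands in for the fixture lock type in the diff; the key format
// "<project> <version>" matches the lm map described above.
type fixLock []string

// lookupLock mirrors the GetProjectInfo logic: prefer a registered
// per-version lock entry, fall back to an empty (dummy) lock.
func lookupLock(lm map[string]fixLock, project, version string) fixLock {
	if l, exists := lm[project+" "+version]; exists {
		return l
	}
	return fixLock{} // dummy lock
}

func main() {
	lm := map[string]fixLock{
		"foo 1.0.0": {"bar 1.0.0"},
	}
	fmt.Println(len(lookupLock(lm, "foo", "1.0.0"))) // registered entry
	fmt.Println(len(lookupLock(lm, "foo", "2.0.0"))) // dummy fallback
}
```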
+
// computeBimodalExternalMap takes a set of depspecs and computes an
// internally-versioned external reach map that is useful for quickly answering
// ListExternal()-type calls.
@@ -508,39 +606,36 @@
}
w := wm{
- ex: make(map[string]struct{}),
- in: make(map[string]struct{}),
+ ex: make(map[string]bool),
+ in: make(map[string]bool),
}
for _, imp := range pkg.imports {
if !checkPrefixSlash(filepath.Clean(imp), string(d.n)) {
// Easy case - if the import is not a child of the base
// project path, put it in the external map
- w.ex[imp] = struct{}{}
+ w.ex[imp] = true
} else {
if w2, seen := workmap[imp]; seen {
// If it is, and we've seen that path, dereference it
// immediately
for i := range w2.ex {
- w.ex[i] = struct{}{}
+ w.ex[i] = true
}
for i := range w2.in {
- w.in[i] = struct{}{}
+ w.in[i] = true
}
} else {
// Otherwise, put it in the 'in' map for later
// reprocessing
- w.in[imp] = struct{}{}
+ w.in[imp] = true
}
}
}
workmap[pkg.path] = w
}
- drm, err := wmToReach(workmap, "")
- if err != nil {
- panic(err)
- }
+ drm := wmToReach(workmap, "")
rm[pident{n: d.n, v: d.v}] = drm
}
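The hunk above switches the workmap's `ex`/`in` sets from `map[string]struct{}` to `map[string]bool`, and dereferences previously seen entries by unioning their sets into the current one. A sketch of that union step in isolation:

```go
package main

import "fmt"

// union copies every key of src into dst, the same fold the workmap
// dereference performs on a previously-seen entry's ex/in sets after the
// switch to map[string]bool.
func union(dst, src map[string]bool) {
	for k := range src {
		dst[k] = true
	}
}

func main() {
	w := map[string]bool{"a/b": true}
	seen := map[string]bool{"c/d": true, "a/b": true}
	union(w, seen)
	fmt.Println(len(w)) // overlapping key counted once
}
```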
diff --git a/vendor/github.com/sdboyer/gps/solve_test.go b/vendor/github.com/sdboyer/gps/solve_test.go
new file mode 100644
index 0000000..95db023
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/solve_test.go
@@ -0,0 +1,440 @@
+package gps
+
+import (
+ "flag"
+ "fmt"
+ "io/ioutil"
+ "log"
+ "math/rand"
+ "os"
+ "reflect"
+ "sort"
+ "strconv"
+ "strings"
+ "testing"
+)
+
+var fixtorun string
+
+// TODO(sdboyer) regression test ensuring that locks with only revs for projects don't cause errors
+func init() {
+ flag.StringVar(&fixtorun, "gps.fix", "", "A single fixture to run in TestBasicSolves")
+ overrideMkBridge()
+}
+
+// sets the mkBridge global func to one that allows virtualized RootDirs
+func overrideMkBridge() {
+ // For all tests, override the base bridge with the depspecBridge that skips
+ // verifyRootDir calls
+ mkBridge = func(s *solver, sm SourceManager) sourceBridge {
+ return &depspecBridge{
+ &bridge{
+ sm: sm,
+ s: s,
+ vlists: make(map[ProjectRoot][]Version),
+ },
+ }
+ }
+}
+
+var stderrlog = log.New(os.Stderr, "", 0)
+
+func fixSolve(params SolveParameters, sm SourceManager) (Solution, error) {
+ if testing.Verbose() {
+ params.Trace = true
+ params.TraceLogger = stderrlog
+ }
+
+ s, err := Prepare(params, sm)
+ if err != nil {
+ return nil, err
+ }
+
+ return s.Solve()
+}
+
+// Test all the basic table fixtures.
+//
+// Or, just the one named in the fix arg.
+func TestBasicSolves(t *testing.T) {
+ if fixtorun != "" {
+ if fix, exists := basicFixtures[fixtorun]; exists {
+ solveBasicsAndCheck(fix, t)
+ }
+ } else {
+ // sort them by their keys so we get stable output
+ var names []string
+ for n := range basicFixtures {
+ names = append(names, n)
+ }
+
+ sort.Strings(names)
+ for _, n := range names {
+ solveBasicsAndCheck(basicFixtures[n], t)
+ if testing.Verbose() {
+ // insert a line break between tests
+ stderrlog.Println("")
+ }
+ }
+ }
+}
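`TestBasicSolves` collects the fixture map's keys and sorts them before iterating, because Go randomizes map iteration order between runs. The idiom, extracted into a helper (names here are illustrative):

```go
package main

import (
	"fmt"
	"sort"
)

// sortedKeys returns a map's keys in sorted order — the same idiom the
// test uses to get stable fixture ordering across runs.
func sortedKeys(m map[string]int) []string {
	var names []string
	for n := range m {
		names = append(names, n)
	}
	sort.Strings(names)
	return names
}

func main() {
	fixtures := map[string]int{"b": 2, "a": 1, "c": 3}
	fmt.Println(sortedKeys(fixtures)) // deterministic order
}
```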
+
+func solveBasicsAndCheck(fix basicFixture, t *testing.T) (res Solution, err error) {
+ if testing.Verbose() {
+ stderrlog.Printf("[[fixture %q]]", fix.n)
+ }
+ sm := newdepspecSM(fix.ds, nil)
+
+ params := SolveParameters{
+ RootDir: string(fix.ds[0].n),
+ ImportRoot: ProjectRoot(fix.ds[0].n),
+ Manifest: fix.ds[0],
+ Lock: dummyLock{},
+ Downgrade: fix.downgrade,
+ ChangeAll: fix.changeall,
+ }
+
+ if fix.l != nil {
+ params.Lock = fix.l
+ }
+
+ res, err = fixSolve(params, sm)
+
+ return fixtureSolveSimpleChecks(fix, res, err, t)
+}
+
+// Test all the bimodal table fixtures.
+//
+// Or, just the one named in the fix arg.
+func TestBimodalSolves(t *testing.T) {
+ if fixtorun != "" {
+ if fix, exists := bimodalFixtures[fixtorun]; exists {
+ solveBimodalAndCheck(fix, t)
+ }
+ } else {
+ // sort them by their keys so we get stable output
+ var names []string
+ for n := range bimodalFixtures {
+ names = append(names, n)
+ }
+
+ sort.Strings(names)
+ for _, n := range names {
+ solveBimodalAndCheck(bimodalFixtures[n], t)
+ if testing.Verbose() {
+ // insert a line break between tests
+ stderrlog.Println("")
+ }
+ }
+ }
+}
+
+func solveBimodalAndCheck(fix bimodalFixture, t *testing.T) (res Solution, err error) {
+ if testing.Verbose() {
+ stderrlog.Printf("[[fixture %q]]", fix.n)
+ }
+ sm := newbmSM(fix)
+
+ params := SolveParameters{
+ RootDir: string(fix.ds[0].n),
+ ImportRoot: ProjectRoot(fix.ds[0].n),
+ Manifest: fix.ds[0],
+ Lock: dummyLock{},
+ Ignore: fix.ignore,
+ Downgrade: fix.downgrade,
+ ChangeAll: fix.changeall,
+ }
+
+ if fix.l != nil {
+ params.Lock = fix.l
+ }
+
+ res, err = fixSolve(params, sm)
+
+ return fixtureSolveSimpleChecks(fix, res, err, t)
+}
+
+func fixtureSolveSimpleChecks(fix specfix, res Solution, err error, t *testing.T) (Solution, error) {
+ if err != nil {
+ errp := fix.expectErrs()
+ if len(errp) == 0 {
+ t.Errorf("(fixture: %q) Solver failed; error was type %T, text:\n%s", fix.name(), err, err)
+ return res, err
+ }
+
+ switch fail := err.(type) {
+ case *badOptsFailure:
+ t.Errorf("(fixture: %q) Unexpected bad opts failure solve error: %s", fix.name(), err)
+ case *noVersionError:
+ if errp[0] != string(fail.pn.ProjectRoot) { // TODO(sdboyer) identifierify
+ t.Errorf("(fixture: %q) Expected failure on project %s, but was on project %s", fix.name(), errp[0], fail.pn.ProjectRoot)
+ }
+
+ ep := make(map[string]struct{})
+ for _, p := range errp[1:] {
+ ep[p] = struct{}{}
+ }
+
+ found := make(map[string]struct{})
+ for _, vf := range fail.fails {
+ for _, f := range getFailureCausingProjects(vf.f) {
+ found[f] = struct{}{}
+ }
+ }
+
+ var missing []string
+ var extra []string
+ for p := range found {
+ if _, has := ep[p]; !has {
+ extra = append(extra, p)
+ }
+ }
+ if len(extra) > 0 {
+ t.Errorf("(fixture: %q) Expected solve failures due to projects %s, but solve failures also arose from %s", fix.name(), strings.Join(errp[1:], ", "), strings.Join(extra, ", "))
+ }
+
+ for p := range ep {
+ if _, has := found[p]; !has {
+ missing = append(missing, p)
+ }
+ }
+ if len(missing) > 0 {
+ t.Errorf("(fixture: %q) Expected solve failures due to projects %s, but %s had no failures", fix.name(), strings.Join(errp[1:], ", "), strings.Join(missing, ", "))
+ }
+
+ default:
+ // TODO(sdboyer) round these out
+ panic(fmt.Sprintf("unhandled solve failure type: %s", err))
+ }
+ } else if len(fix.expectErrs()) > 0 {
+ t.Errorf("(fixture: %q) Solver succeeded, but expected failure", fix.name())
+ } else {
+ r := res.(solution)
+ if fix.maxTries() > 0 && r.Attempts() > fix.maxTries() {
+ t.Errorf("(fixture: %q) Solver completed in %v attempts, but expected %v or fewer", fix.name(), r.att, fix.maxTries())
+ }
+
+ // Dump result projects into a map for easier interrogation
+ rp := make(map[string]Version)
+ for _, p := range r.p {
+ pa := p.toAtom()
+ rp[string(pa.id.ProjectRoot)] = pa.v
+ }
+
+ fixlen, rlen := len(fix.solution()), len(rp)
+ if fixlen != rlen {
+ // Different length, so they definitely disagree
+ t.Errorf("(fixture: %q) Solver reported %v package results, but fixture expected %v", fix.name(), rlen, fixlen)
+ }
+
+ // Whether or not len is same, still have to verify that results agree
+ // Walk through fixture/expected results first
+ for p, v := range fix.solution() {
+ if av, exists := rp[p]; !exists {
+ t.Errorf("(fixture: %q) Project %q expected but missing from results", fix.name(), p)
+ } else {
+ // delete result from map so we skip it on the reverse pass
+ delete(rp, p)
+ if v != av {
+ t.Errorf("(fixture: %q) Expected version %q of project %q, but actual version was %q", fix.name(), v, p, av)
+ }
+ }
+ }
+
+ // Now walk through remaining actual results
+ for p, v := range rp {
+ if fv, exists := fix.solution()[p]; !exists {
+ t.Errorf("(fixture: %q) Unexpected project %q present in results", fix.name(), p)
+ } else if v != fv {
+ t.Errorf("(fixture: %q) Got version %q of project %q, but expected version was %q", fix.name(), v, p, fv)
+ }
+ }
+ }
+
+ return res, err
+}
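The solution-checking loop above uses a two-pass comparison: walk the expected map, deleting each match from a copy of the actual results, then report anything left over as unexpected. A self-contained sketch of that pattern (the function name and return shape are hypothetical, not part of gps):

```go
package main

import "fmt"

// diffVersions compares expected and actual project→version maps the way
// fixtureSolveSimpleChecks does: delete each expected match from a copy of
// the actual side, then whatever remains is extra.
func diffVersions(want, got map[string]string) (missing, wrong, extra []string) {
	rest := make(map[string]string, len(got))
	for k, v := range got {
		rest[k] = v
	}
	for p, v := range want {
		av, ok := rest[p]
		if !ok {
			missing = append(missing, p)
			continue
		}
		delete(rest, p) // matched: skip it on the reverse pass
		if av != v {
			wrong = append(wrong, p)
		}
	}
	for p := range rest {
		extra = append(extra, p)
	}
	return
}

func main() {
	want := map[string]string{"foo": "1.0.0", "bar": "1.0.1"}
	got := map[string]string{"foo": "1.0.0", "baz": "2.0.0"}
	missing, wrong, extra := diffVersions(want, got)
	fmt.Println(len(missing), len(wrong), len(extra)) // bar missing, baz extra
}
```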
+
+// This tests that, when a root lock is underspecified (has only a version), we
+// don't allow a match on that version from a rev in the manifest. We may allow
+// this in the future, but disallow it for now because going from an immutable
+// requirement to a mutable lock automagically is a bad direction that could
+// produce weird side effects.
+func TestRootLockNoVersionPairMatching(t *testing.T) {
+ fix := basicFixture{
+ n: "does not pair bare revs in manifest with unpaired lock version",
+ ds: []depspec{
+ mkDepspec("root 0.0.0", "foo *"), // foo's constraint rewritten below to foorev
+ mkDepspec("foo 1.0.0", "bar 1.0.0"),
+ mkDepspec("foo 1.0.1 foorev", "bar 1.0.1"),
+ mkDepspec("foo 1.0.2 foorev", "bar 1.0.2"),
+ mkDepspec("bar 1.0.0"),
+ mkDepspec("bar 1.0.1"),
+ mkDepspec("bar 1.0.2"),
+ },
+ l: mklock(
+ "foo 1.0.1",
+ ),
+ r: mksolution(
+ "foo 1.0.2 foorev",
+ "bar 1.0.1",
+ ),
+ }
+
+ pd := fix.ds[0].deps[0]
+ pd.Constraint = Revision("foorev")
+ fix.ds[0].deps[0] = pd
+
+ sm := newdepspecSM(fix.ds, nil)
+
+ l2 := make(fixLock, 1)
+ copy(l2, fix.l)
+ l2[0].v = nil
+
+ params := SolveParameters{
+ RootDir: string(fix.ds[0].n),
+ ImportRoot: ProjectRoot(fix.ds[0].n),
+ Manifest: fix.ds[0],
+ Lock: l2,
+ }
+
+ res, err := fixSolve(params, sm)
+
+ fixtureSolveSimpleChecks(fix, res, err, t)
+}
+
+func getFailureCausingProjects(err error) (projs []string) {
+ switch e := err.(type) {
+ case *noVersionError:
+ projs = append(projs, string(e.pn.ProjectRoot)) // TODO(sdboyer) identifierify
+ case *disjointConstraintFailure:
+ for _, f := range e.failsib {
+ projs = append(projs, string(f.depender.id.ProjectRoot))
+ }
+ case *versionNotAllowedFailure:
+ for _, f := range e.failparent {
+ projs = append(projs, string(f.depender.id.ProjectRoot))
+ }
+ case *constraintNotAllowedFailure:
+ // No sane way of knowing why the currently selected version is
+ // selected, so do nothing
+ case *sourceMismatchFailure:
+ projs = append(projs, string(e.prob.id.ProjectRoot))
+ for _, c := range e.sel {
+ projs = append(projs, string(c.depender.id.ProjectRoot))
+ }
+ case *checkeeHasProblemPackagesFailure:
+ projs = append(projs, string(e.goal.id.ProjectRoot))
+ for _, errdep := range e.failpkg {
+ for _, atom := range errdep.deppers {
+ projs = append(projs, string(atom.id.ProjectRoot))
+ }
+ }
+ case *depHasProblemPackagesFailure:
+ projs = append(projs, string(e.goal.depender.id.ProjectRoot), string(e.goal.dep.Ident.ProjectRoot))
+ case *nonexistentRevisionFailure:
+ projs = append(projs, string(e.goal.depender.id.ProjectRoot), string(e.goal.dep.Ident.ProjectRoot))
+ default:
+ panic(fmt.Sprintf("unknown failtype %T, msg: %s", err, err))
+ }
+
+ return
+}
+
+func TestBadSolveOpts(t *testing.T) {
+ pn := strconv.FormatInt(rand.Int63(), 36)
+ fix := basicFixtures["no dependencies"]
+ fix.ds[0].n = ProjectRoot(pn)
+
+ sm := newdepspecSM(fix.ds, nil)
+ params := SolveParameters{}
+
+ _, err := Prepare(params, nil)
+ if err == nil {
+ t.Errorf("Prepare should have errored on nil SourceManager")
+ } else if !strings.Contains(err.Error(), "non-nil SourceManager") {
+ t.Error("Prepare should have given error on nil SourceManager, but gave:", err)
+ }
+
+ _, err = Prepare(params, sm)
+ if err == nil {
+ t.Errorf("Prepare should have errored on empty root")
+ } else if !strings.Contains(err.Error(), "non-empty root directory") {
+ t.Error("Prepare should have given error on empty root, but gave:", err)
+ }
+
+ params.RootDir = pn
+ _, err = Prepare(params, sm)
+ if err == nil {
+ t.Errorf("Prepare should have errored on empty name")
+ } else if !strings.Contains(err.Error(), "non-empty import root") {
+ t.Error("Prepare should have given error on empty import root, but gave:", err)
+ }
+
+ params.ImportRoot = ProjectRoot(pn)
+ params.Trace = true
+ _, err = Prepare(params, sm)
+ if err == nil {
+ t.Errorf("Should have errored on trace with no logger")
+ } else if !strings.Contains(err.Error(), "no logger provided") {
+ t.Error("Prepare should have given error on missing trace logger, but gave:", err)
+ }
+
+ params.TraceLogger = log.New(ioutil.Discard, "", 0)
+ _, err = Prepare(params, sm)
+ if err != nil {
+ t.Error("Basic conditions satisfied; Prepare should have succeeded, but got err:", err)
+ }
+
+ // swap out the test mkBridge override temporarily, just to make sure we get
+ // the right error
+ mkBridge = func(s *solver, sm SourceManager) sourceBridge {
+ return &bridge{
+ sm: sm,
+ s: s,
+ vlists: make(map[ProjectRoot][]Version),
+ }
+ }
+
+ _, err = Prepare(params, sm)
+ if err == nil {
+ t.Errorf("Should have errored on nonexistent root")
+ } else if !strings.Contains(err.Error(), "could not read project root") {
+ t.Error("Prepare should have given error on nonexistent project root dir, but gave:", err)
+ }
+
+ // Pointing it at a file should also be an err
+ params.RootDir = "solve_test.go"
+ _, err = Prepare(params, sm)
+ if err == nil {
+ t.Errorf("Should have errored on file for RootDir")
+ } else if !strings.Contains(err.Error(), "is a file, not a directory") {
+ t.Error("Prepare should have given error on file as RootDir, but gave:", err)
+ }
+
+ // swap them back...not sure if this matters, but just in case
+ overrideMkBridge()
+}
+
+func TestIgnoreDedupe(t *testing.T) {
+ fix := basicFixtures["no dependencies"]
+
+ ig := []string{"foo", "foo", "bar"}
+ params := SolveParameters{
+ RootDir: string(fix.ds[0].n),
+ ImportRoot: ProjectRoot(fix.ds[0].n),
+ Manifest: fix.ds[0],
+ Ignore: ig,
+ }
+
+ s, _ := Prepare(params, newdepspecSM(basicFixtures["no dependencies"].ds, nil))
+ ts := s.(*solver)
+
+ expect := map[string]bool{
+ "foo": true,
+ "bar": true,
+ }
+
+ if !reflect.DeepEqual(ts.ig, expect) {
+ t.Errorf("Expected solver's ignore list to be a deduplicated map, got %v", ts.ig)
+ }
+}
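`TestIgnoreDedupe` relies on the fact that `Prepare` converts the `Ignore` slice into a `map[string]bool`, which deduplicates entries as a side effect. The conversion in isolation:

```go
package main

import "fmt"

// toIgnoreSet converts an Ignore slice into a set, deduplicating as a side
// effect — the same conversion Prepare performs on SolveParameters.Ignore.
func toIgnoreSet(ignore []string) map[string]bool {
	ig := make(map[string]bool)
	for _, pkg := range ignore {
		ig[pkg] = true
	}
	return ig
}

func main() {
	ig := toIgnoreSet([]string{"foo", "foo", "bar"})
	fmt.Println(len(ig), ig["foo"], ig["bar"]) // duplicate foo collapsed
}
```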
diff --git a/vendor/github.com/sdboyer/vsolver/solver.go b/vendor/github.com/sdboyer/gps/solver.go
similarity index 69%
rename from vendor/github.com/sdboyer/vsolver/solver.go
rename to vendor/github.com/sdboyer/gps/solver.go
index 0ea3dbe..121bc81 100644
--- a/vendor/github.com/sdboyer/vsolver/solver.go
+++ b/vendor/github.com/sdboyer/gps/solver.go
@@ -1,39 +1,53 @@
-package vsolver
+package gps
import (
"container/heap"
"fmt"
"log"
- "math/rand"
"os"
"sort"
- "strconv"
"strings"
"github.com/armon/go-radix"
)
-var (
- // With a random revision and no name, collisions are unlikely
- nilpa = atom{
- v: Revision(strconv.FormatInt(rand.Int63(), 36)),
- }
-)
+// SolveParameters holds all arguments to a solver run.
+//
+// Only RootDir and ImportRoot are absolutely required. A nil Manifest is
+// allowed, though it usually makes little sense.
+//
+// Of these properties, only Manifest and Ignore are (directly) incorporated in
+// memoization hashing.
+type SolveParameters struct {
+ // The path to the root of the project on which the solver should operate.
+ // This should point to the directory that should contain the vendor/
+ // directory.
+ //
+ // In general, it is wise for this to be under an active GOPATH, though it
+ // is not (currently) required.
+ //
+ // A real path to a readable directory is required.
+ RootDir string
-// SolveArgs comprise the required inputs for a Solve run.
-type SolveArgs struct {
- // The path to the root of the project on which the solver is working.
- Root string
+ // The import path at the base of all import paths covered by the project.
+ // For example, the appropriate value for gps itself here is:
+ //
+ // github.com/sdboyer/gps
+ //
+ // In most cases, this should match the latter portion of RootDir. However,
+ // that is not (currently) required.
+ //
+ // A non-empty string is required.
+ ImportRoot ProjectRoot
- // The 'name' of the project. Required. This should (must?) correspond to subpath of
- // Root that exists under a GOPATH.
- Name ProjectName
-
- // The root manifest. Required. This contains all the dependencies, constraints, and
+ // The root manifest. This contains all the dependencies, constraints, and
// other controls available to the root project.
+ //
+ // May be nil, but for most cases, that would be unwise.
Manifest Manifest
- // The root lock. Optional. Generally, this lock is the output of a previous solve run.
+ // The root lock. Optional. Generally, this lock is the output of a previous
+ // solve run.
//
// If provided, the solver will attempt to preserve the versions specified
// in the lock, unless ToChange or ChangeAll settings indicate otherwise.
@@ -43,21 +57,6 @@
// project, or from elsewhere. Ignoring a package means that both it and its
// imports will be disregarded by all relevant solver operations.
Ignore []string
-}
-
-// SolveOpts holds additional options that govern solving behavior.
-type SolveOpts struct {
- // Downgrade indicates whether the solver will attempt to upgrade (false) or
- // downgrade (true) projects that are not locked, or are marked for change.
- //
- // Upgrading is, by far, the most typical case. The field is named
- // 'Downgrade' so that the bool's zero value corresponds to that most
- // typical case.
- Downgrade bool
-
- // ChangeAll indicates that all projects should be changed - that is, any
- // versions specified in the root lock file should be ignored.
- ChangeAll bool
// ToChange is a list of project names that should be changed - that is, any
// versions specified for those projects in the root lock file should be
@@ -66,7 +65,19 @@
// Passing ChangeAll has subtly different behavior from enumerating all
// projects into ToChange. In general, ToChange should *only* be used if the
// user expressly requested an upgrade for a specific project.
- ToChange []ProjectName
+ ToChange []ProjectRoot
+
+ // ChangeAll indicates that all projects should be changed - that is, any
+ // versions specified in the root lock file should be ignored.
+ ChangeAll bool
+
+ // Downgrade indicates whether the solver will attempt to upgrade (false) or
+ // downgrade (true) projects that are not locked, or are marked for change.
+ //
+ // Upgrading is, by far, the most typical case. The field is named
+ // 'Downgrade' so that the bool's zero value corresponds to that most
+ // typical case.
+ Downgrade bool
// Trace controls whether the solver will generate informative trace output
// as it moves through the solving process.
@@ -85,13 +96,13 @@
// starts moving forward again.
attempts int
- // SolveArgs are the essential inputs to the solver. The solver will abort
- // early if these options are not appropriately set.
- args SolveArgs
-
- // SolveOpts are the configuration options provided to the solver. The
- // solver will abort early if certain options are not appropriately set.
- o SolveOpts
+ // SolveParameters are the inputs to the solver. They determine both what
+ // data the solver should operate on, and certain aspects of how solving
+ // proceeds.
+ //
+ // Prepare() validates these, so by the time we have a *solver instance, we
+ // know they're valid.
+ params SolveParameters
// Logger used exclusively for trace output, if the trace option is set.
tl *log.Logger
@@ -102,8 +113,9 @@
// names a SourceManager operates on.
b sourceBridge
- // The list of projects currently "selected" - that is, they have passed all
- // satisfiability checks, and are part of the current solution.
+ // A stack containing projects and packages that are currently "selected" -
+ // that is, they have passed all satisfiability checks, and are part of the
+ // current solution.
//
// The *selection type is mostly just a dumb data container; the solver
// itself is responsible for maintaining that invariant.
@@ -120,87 +132,100 @@
// removal.
unsel *unselected
- // Map of packages to ignore. This is derived by converting SolveArgs.Ignore
+ // Map of packages to ignore. Derived by converting SolveParameters.Ignore
// into a map during solver prep - which also, nicely, deduplicates it.
ig map[string]bool
- // A list of all the currently active versionQueues in the solver. The set
+ // A stack of all the currently active versionQueues in the solver. The set
// of projects represented here corresponds closely to what's in s.sel,
- // although s.sel will always contain the root project, and s.versions never
- // will.
- versions []*versionQueue // TODO rename to vq
+ // although s.sel will always contain the root project, and s.vqs never
+ // will. Also, s.vqs is only added to (or popped from during backtracking)
+ // when a new project is selected; it is untouched when new packages are
+ // added to an existing project.
+ vqs []*versionQueue
- // A map of the ProjectName (local names) that should be allowed to change
- chng map[ProjectName]struct{}
+ // A map of the ProjectRoot (local names) that should be allowed to change
+ chng map[ProjectRoot]struct{}
- // A map of the ProjectName (local names) that are currently selected, and
+ // A map of the ProjectRoot (local names) that are currently selected, and
// the network name to which they currently correspond.
- names map[ProjectName]string
+ names map[ProjectRoot]string
// A map of the names listed in the root's lock.
rlm map[ProjectIdentifier]LockedProject
// A normalized, copied version of the root manifest.
rm Manifest
+
+ // A normalized, copied version of the root lock.
+ rl Lock
}
-// A Solver is the main workhorse of vsolver: given a set of project inputs, it
+// A Solver is the main workhorse of gps: given a set of project inputs, it
// performs a constraint solving analysis to develop a complete Result that can
// be used as a lock file, and to populate a vendor directory.
type Solver interface {
HashInputs() ([]byte, error)
- Solve() (Result, error)
+ Solve() (Solution, error)
}
// Prepare readies a Solver for use.
//
-// This function reads and validates the provided SolveArgs and SolveOpts. If a
-// problem with the inputs is detected, an error is returned. Otherwise, a
-// Solver is returned, ready to hash and check inputs or perform a solving run.
-func Prepare(args SolveArgs, opts SolveOpts, sm SourceManager) (Solver, error) {
+// This function reads and validates the provided SolveParameters. If a problem
+// with the inputs is detected, an error is returned. Otherwise, a Solver is
+// returned, ready to hash and check inputs or perform a solving run.
+func Prepare(params SolveParameters, sm SourceManager) (Solver, error) {
// local overrides would need to be handled first.
- // TODO local overrides! heh
+ // TODO(sdboyer) local overrides! heh
- if args.Manifest == nil {
- return nil, badOptsFailure("Opts must include a manifest.")
+ if sm == nil {
+ return nil, badOptsFailure("must provide non-nil SourceManager")
}
- if args.Root == "" {
- return nil, badOptsFailure("Opts must specify a non-empty string for the project root directory. If cwd is desired, use \".\"")
+ if params.RootDir == "" {
+ return nil, badOptsFailure("params must specify a non-empty root directory")
}
- if args.Name == "" {
- return nil, badOptsFailure("Opts must include a project name. This should be the intended root import path of the project.")
+ if params.ImportRoot == "" {
+ return nil, badOptsFailure("params must include a non-empty import root")
}
- if opts.Trace && opts.TraceLogger == nil {
- return nil, badOptsFailure("Trace requested, but no logger provided.")
+ if params.Trace && params.TraceLogger == nil {
+ return nil, badOptsFailure("trace requested, but no logger provided")
+ }
+
+ if params.Manifest == nil {
+ params.Manifest = SimpleManifest{}
}
// Ensure the ignore map is at least initialized
ig := make(map[string]bool)
- if len(args.Ignore) > 0 {
- for _, pkg := range args.Ignore {
+ if len(params.Ignore) > 0 {
+ for _, pkg := range params.Ignore {
ig[pkg] = true
}
}
s := &solver{
- args: args,
- o: opts,
- ig: ig,
- b: &bridge{
- sm: sm,
- sortdown: opts.Downgrade,
- name: args.Name,
- root: args.Root,
- ignore: ig,
- vlists: make(map[ProjectName][]Version),
- },
- tl: opts.TraceLogger,
+ params: params,
+ ig: ig,
+ tl: params.TraceLogger,
+ }
+
+ // Set up the bridge and ensure the root dir is in good, working order
+ // before doing anything else. (This call is stubbed out in tests, via
+ // overriding mkBridge(), so we can run with virtual RootDir.)
+ s.b = mkBridge(s, sm)
+ err := s.b.verifyRootDir(s.params.RootDir)
+ if err != nil {
+ return nil, err
}
// Initialize maps
- s.chng = make(map[ProjectName]struct{})
+ s.chng = make(map[ProjectRoot]struct{})
s.rlm = make(map[ProjectIdentifier]LockedProject)
- s.names = make(map[ProjectName]string)
+ s.names = make(map[ProjectRoot]string)
+
+ for _, v := range s.params.ToChange {
+ s.chng[v] = struct{}{}
+ }
// Initialize stacks and queues
s.sel = &selection{
@@ -212,39 +237,31 @@
cmp: s.unselectedComparator,
}
+ // Prep safe, normalized versions of root manifest and lock data
+ s.rm = prepManifest(s.params.Manifest)
+ if s.params.Lock != nil {
+ for _, lp := range s.params.Lock.Projects() {
+ s.rlm[lp.Ident().normalize()] = lp
+ }
+
+ // Also keep a prepped one, mostly for the bridge. This is probably
+ // wasteful, but only minimally so, and yay symmetry
+ s.rl = prepLock(s.params.Lock)
+ }
+
return s, nil
}
// Solve attempts to find a dependency solution for the given project, as
-// represented by the SolveArgs and accompanying SolveOpts with which this
-// Solver was created.
+// represented by the SolveParameters with which this Solver was created.
//
-// This is the entry point to the main vsolver workhorse.
-func (s *solver) Solve() (Result, error) {
- // Ensure the root is in good, working order before doing anything else
- err := s.b.verifyRoot(s.args.Root)
- if err != nil {
- return nil, err
- }
-
- // Prep safe, normalized versions of root manifest and lock data
- s.rm = prepManifest(s.args.Manifest, s.args.Name)
-
- if s.args.Lock != nil {
- for _, lp := range s.args.Lock.Projects() {
- s.rlm[lp.Ident().normalize()] = lp
- }
- }
-
- for _, v := range s.o.ToChange {
- s.chng[v] = struct{}{}
- }
-
+// This is the entry point to the main gps workhorse.
+func (s *solver) Solve() (Solution, error) {
// Prime the queues with the root project
- err = s.selectRoot()
+ err := s.selectRoot()
if err != nil {
- // TODO this properly with errs, yar
- panic("couldn't select root, yikes")
+ // TODO(sdboyer) this properly with errs, yar
+ return nil, err
}
// Log initial step
@@ -256,7 +273,7 @@
return nil, err
}
- r := result{
+ r := solution{
att: s.attempts,
}
@@ -318,7 +335,7 @@
},
pl: bmi.pl,
})
- s.versions = append(s.versions, queue)
+ s.vqs = append(s.vqs, queue)
s.logSolve()
} else {
// We're just trying to add packages to an already-selected project.
@@ -340,7 +357,7 @@
pl: bmi.pl,
}
- s.logStart(bmi) // TODO different special start logger for this path
+ s.logStart(bmi) // TODO(sdboyer) different special start logger for this path
err := s.checkPackage(nawp)
if err != nil {
// Err means a failure somewhere down the line; try backtracking.
@@ -352,8 +369,8 @@
}
s.selectPackages(nawp)
// We don't add anything to the stack of version queues because the
- // backtracker knows not to popping the vqstack if it backtracks
- // across a package addition.
+ // backtracker knows not to pop the vqstack if it backtracks
+ // across a pure-package addition.
s.logSolve()
}
}
@@ -383,7 +400,7 @@
func (s *solver) selectRoot() error {
pa := atom{
id: ProjectIdentifier{
- LocalName: s.args.Name,
+ ProjectRoot: s.params.ImportRoot,
},
// This is a hack so that the root project doesn't have a nil version.
// It's sort of OK because the root never makes it out into the results.
@@ -410,28 +427,28 @@
}
// Push the root project onto the queue.
- // TODO maybe it'd just be better to skip this?
+ // TODO(sdboyer) maybe it'd just be better to skip this?
s.sel.pushSelection(a, true)
// If we're looking for root's deps, get it from opts and local root
// analysis, rather than having the sm do it
mdeps := append(s.rm.DependencyConstraints(), s.rm.TestDependencyConstraints()...)
- reach, err := s.b.computeRootReach()
- if err != nil {
- return err
- }
+
+ // Err is not possible at this point, as it could only come from
+ // listPackages(), which if we're here already succeeded for root
+ reach, _ := s.b.computeRootReach()
deps, err := s.intersectConstraintsWithImports(mdeps, reach)
if err != nil {
- // TODO this could well happen; handle it with a more graceful error
+ // TODO(sdboyer) this could well happen; handle it with a more graceful error
panic(fmt.Sprintf("shouldn't be possible %s", err))
}
for _, dep := range deps {
s.sel.pushDep(dependency{depender: pa, dep: dep})
// Add all to unselected queue
- s.names[dep.Ident.LocalName] = dep.Ident.netName()
- heap.Push(s.unsel, bimodalIdentifier{id: dep.Ident, pl: dep.pl})
+ s.names[dep.Ident.ProjectRoot] = dep.Ident.netName()
+ heap.Push(s.unsel, bimodalIdentifier{id: dep.Ident, pl: dep.pl, fromRoot: true})
}
return nil
@@ -440,7 +457,7 @@
func (s *solver) getImportsAndConstraintsOf(a atomWithPackages) ([]completeDep, error) {
var err error
- if s.rm.Name() == a.a.id.LocalName {
+ if s.params.ImportRoot == a.a.id.ProjectRoot {
panic("Should never need to recheck imports/constraints from root during solve")
}
@@ -456,22 +473,28 @@
return nil, err
}
- allex, err := ptree.ExternalReach(false, false, s.ig)
- if err != nil {
- return nil, err
- }
-
+ allex := ptree.ExternalReach(false, false, s.ig)
// Use a map to dedupe the unique external packages
exmap := make(map[string]struct{})
// Add the packages reached by the packages explicitly listed in the atom to
// the list
for _, pkg := range a.pl {
- if expkgs, exists := allex[pkg]; !exists {
- return nil, fmt.Errorf("package %s does not exist within project %s", pkg, a.a.id.errString())
- } else {
- for _, ex := range expkgs {
- exmap[ex] = struct{}{}
+ expkgs, exists := allex[pkg]
+ if !exists {
+ // missing package here *should* only happen if the target pkg was
+ // poisoned somehow - check the original ptree.
+ if perr, exists := ptree.Packages[pkg]; exists {
+ if perr.Err != nil {
+ return nil, fmt.Errorf("package %s has errors: %s", pkg, perr.Err)
+ }
+ return nil, fmt.Errorf("package %s depends on some other package within %s with errors", pkg, a.a.id.errString())
}
+ // Nope, it's actually not there. This shouldn't happen.
+ return nil, fmt.Errorf("package %s does not exist within project %s", pkg, a.a.id.errString())
+ }
+
+ for _, ex := range expkgs {
+ exmap[ex] = struct{}{}
}
}
@@ -483,7 +506,7 @@
}
deps := m.DependencyConstraints()
- // TODO add overrides here...if we impl the concept (which we should)
+ // TODO(sdboyer) add overrides here...if we impl the concept (which we should)
return s.intersectConstraintsWithImports(deps, reach)
}
@@ -492,22 +515,22 @@
// externally reached packages, and creates a []completeDep that is guaranteed
// to include all packages named by import reach, using constraints where they
// are available, or Any() where they are not.
-func (s *solver) intersectConstraintsWithImports(deps []ProjectDep, reach []string) ([]completeDep, error) {
+func (s *solver) intersectConstraintsWithImports(deps []ProjectConstraint, reach []string) ([]completeDep, error) {
// Create a radix tree with all the projects we know from the manifest
- // TODO make this smarter once we allow non-root inputs as 'projects'
+ // TODO(sdboyer) make this smarter once we allow non-root inputs as 'projects'
xt := radix.New()
for _, dep := range deps {
- xt.Insert(string(dep.Ident.LocalName), dep)
+ xt.Insert(string(dep.Ident.ProjectRoot), dep)
}
// Step through the reached packages; if they have prefix matches in
// the trie, assume (mostly) it's a correct correspondence.
- dmap := make(map[ProjectName]completeDep)
+ dmap := make(map[ProjectRoot]completeDep)
for _, rp := range reach {
// If it's a stdlib package, skip it.
- // TODO this just hardcodes us to the packages in tip - should we
+ // TODO(sdboyer) this just hardcodes us to the packages in tip - should we
// have go version magic here, too?
- if _, exists := stdlib[rp]; exists {
+ if stdlib[rp] {
continue
}
@@ -529,14 +552,14 @@
// Match is valid; put it in the dmap, either creating a new
// completeDep or appending it to the existing one for this base
// project/prefix.
- dep := idep.(ProjectDep)
- if cdep, exists := dmap[dep.Ident.LocalName]; exists {
+ dep := idep.(ProjectConstraint)
+ if cdep, exists := dmap[dep.Ident.ProjectRoot]; exists {
cdep.pl = append(cdep.pl, rp)
- dmap[dep.Ident.LocalName] = cdep
+ dmap[dep.Ident.ProjectRoot] = cdep
} else {
- dmap[dep.Ident.LocalName] = completeDep{
- ProjectDep: dep,
- pl: []string{rp},
+ dmap[dep.Ident.ProjectRoot] = completeDep{
+ ProjectConstraint: dep,
+ pl: []string{rp},
}
}
continue
@@ -551,9 +574,9 @@
}
// Still no matches; make a new completeDep with an open constraint
- pd := ProjectDep{
+ pd := ProjectConstraint{
Ident: ProjectIdentifier{
- LocalName: ProjectName(root.Base),
+ ProjectRoot: ProjectRoot(root.Base),
NetworkName: root.Base,
},
Constraint: Any(),
@@ -563,9 +586,9 @@
// project get caught by the prefix search
xt.Insert(root.Base, pd)
// And also put the complete dep into the dmap
- dmap[ProjectName(root.Base)] = completeDep{
- ProjectDep: pd,
- pl: []string{rp},
+ dmap[ProjectRoot(root.Base)] = completeDep{
+ ProjectConstraint: pd,
+ pl: []string{rp},
}
}
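The prefix-matching step above can be illustrated without the radix tree. This is a simplified, self-contained sketch (the `matchRoot` helper is illustrative, not part of gps), including the path-boundary guard that keeps a root like `github.com/foo/bar` from matching `github.com/foo/barbaz`:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// matchRoot finds the longest known project root that is a path-prefix of a
// reached import path, mirroring (in simplified form) the radix-tree prefix
// search in intersectConstraintsWithImports. Names here are illustrative.
func matchRoot(roots []string, pkg string) (string, bool) {
	// Longest roots first, so the deepest match wins.
	sorted := append([]string(nil), roots...)
	sort.Slice(sorted, func(i, j int) bool { return len(sorted[i]) > len(sorted[j]) })
	for _, r := range sorted {
		// Exact match, or prefix match on a path boundary only.
		if pkg == r || strings.HasPrefix(pkg, r+"/") {
			return r, true
		}
	}
	return "", false
}

func main() {
	roots := []string{"github.com/sdboyer/gps", "github.com/Masterminds/semver"}
	fmt.Println(matchRoot(roots, "github.com/sdboyer/gps/internal"))
	fmt.Println(matchRoot(roots, "github.com/unknown/pkg"))
}
```

The `r+"/"` boundary check matters for correctness; the real code similarly validates that a trie hit corresponds to an actual path segment before accepting it.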
@@ -583,8 +606,8 @@
func (s *solver) createVersionQueue(bmi bimodalIdentifier) (*versionQueue, error) {
id := bmi.id
// If on the root package, there's no queue to make
- if id.LocalName == s.rm.Name() {
- return newVersionQueue(id, nilpa, s.b)
+ if s.params.ImportRoot == id.ProjectRoot {
+ return newVersionQueue(id, nil, nil, s.b)
}
exists, err := s.b.repoExists(id)
@@ -598,13 +621,13 @@
}
if exists {
// Project exists only in vendor (and in some manifest somewhere)
- // TODO mark this for special handling, somehow?
+ // TODO(sdboyer) mark this for special handling, somehow?
} else {
return nil, newSolveError(fmt.Sprintf("Project '%s' could not be located.", id), cannotResolve)
}
}
- lockv := nilpa
+ var lockv Version
if len(s.rlm) > 0 {
lockv, err = s.getLockVersionIfValid(id)
if err != nil {
@@ -614,13 +637,84 @@
}
}
- q, err := newVersionQueue(id, lockv, s.b)
+ var prefv Version
+ if bmi.fromRoot {
+ // If this bmi came from the root, then we want to search through things
+ // with a dependency on it in order to see if any have a lock that might
+ // express a prefv
+ //
+ // TODO(sdboyer) nested loop; prime candidate for a cache somewhere
+ for _, dep := range s.sel.getDependenciesOn(bmi.id) {
+ // Skip the root, of course
+ if s.params.ImportRoot == dep.depender.id.ProjectRoot {
+ continue
+ }
+
+ _, l, err := s.b.getProjectInfo(dep.depender)
+ if err != nil || l == nil {
+ // err being non-nil really shouldn't be possible, but the lock
+ // being nil is quite likely
+ continue
+ }
+
+ for _, lp := range l.Projects() {
+ if lp.Ident().eq(bmi.id) {
+ prefv = lp.Version()
+ }
+ }
+ }
+
+ // OTHER APPROACH - WRONG, BUT MAYBE USEFUL FOR REFERENCE?
+ // If this bmi came from the root, then we want to search the unselected
+ // queue to see if anything *else* wants this ident, in which case we
+ // pick up that prefv
+ //for _, bmi2 := range s.unsel.sl {
+ //// Take the first thing from the queue that's for the same ident,
+ //// and has a non-nil prefv
+ //if bmi.id.eq(bmi2.id) {
+ //if bmi2.prefv != nil {
+ //prefv = bmi2.prefv
+ //}
+ //}
+ //}
+
+ } else {
+ // Otherwise, just use the preferred version expressed in the bmi
+ prefv = bmi.prefv
+ }
+
+ q, err := newVersionQueue(id, lockv, prefv, s.b)
if err != nil {
- // TODO this particular err case needs to be improved to be ONLY for cases
+ // TODO(sdboyer) this particular err case needs to be improved to be ONLY for cases
// where there's absolutely nothing findable about a given project name
return nil, err
}
+ // Hack in support for revisions.
+ //
+ // By design, revs aren't returned from ListVersions(). Thus, if the dep in
+ // the bmi has a rev constraint, it is (almost) guaranteed to fail, even
+ // if that rev does exist in the repo. So, detect a rev and push it into the
+ // vq here, instead.
+ //
+ // Happily, the solver maintains the invariant that constraints on a given
+ // ident cannot be incompatible, so we know that if we find one rev, then
+ // any other deps will have to also be on that rev (or Any).
+ //
+ // TODO(sdboyer) while this does work, it bypasses the interface-implied guarantees
+ // of the version queue, and is therefore not a great strategy for API
+ // coherency. Folding this in to a formal interface would be better.
+ switch tc := s.sel.getConstraint(bmi.id).(type) {
+ case Revision:
+ // We know this is the only thing that could possibly match, so put it
+ // in at the front - if it isn't there already.
+ if q.pi[0] != tc {
+ // Existence of the revision is guaranteed by checkRevisionExists().
+ q.pi = append([]Version{tc}, q.pi...)
+ }
+ }
+
+ // Having assembled the queue, search it for a valid version.
return q, s.findValidVersion(q, bmi.pl)
}
@@ -632,7 +726,8 @@
// parameter.
func (s *solver) findValidVersion(q *versionQueue, pl []string) error {
if nil == q.current() {
- // TODO this case shouldn't be reachable, but panic here as a canary
+ // this case should not be reachable, but reflects improper solver state
+ // if it is, so panic immediately
panic("version queue is empty, should not happen")
}
@@ -681,24 +776,24 @@
//
// If any of these three conditions are true (or if the id cannot be found in
// the root lock), then no atom will be returned.
-func (s *solver) getLockVersionIfValid(id ProjectIdentifier) (atom, error) {
+func (s *solver) getLockVersionIfValid(id ProjectIdentifier) (Version, error) {
// If the project is specifically marked for changes, then don't look for a
// locked version.
- if _, explicit := s.chng[id.LocalName]; explicit || s.o.ChangeAll {
+ if _, explicit := s.chng[id.ProjectRoot]; explicit || s.params.ChangeAll {
// For projects with an upstream or cache repository, it's safe to
// ignore what's in the lock, because there's presumably more versions
// to be found and attempted in the repository. If it's only in vendor,
// though, then we have to try to use what's in the lock, because that's
// the only version we'll be able to get.
if exist, _ := s.b.repoExists(id); exist {
- return nilpa, nil
+ return nil, nil
}
// However, if a change was *expressly* requested for something that
// exists only in vendor, then that guarantees we don't have enough
// information to complete a solution. In that case, error out.
if explicit {
- return nilpa, &missingSourceFailure{
+ return nil, &missingSourceFailure{
goal: id,
prob: "Cannot upgrade %s, as no source repository could be found.",
}
@@ -707,7 +802,7 @@
lp, exists := s.rlm[id]
if !exists {
- return nilpa, nil
+ return nil, nil
}
constraint := s.sel.getConstraint(id)
@@ -739,37 +834,34 @@
if !found {
s.logSolve("%s in root lock, but current constraints disallow it", id.errString())
- return nilpa, nil
+ return nil, nil
}
}
s.logSolve("using root lock's version of %s", id.errString())
- return atom{
- id: id,
- v: v,
- }, nil
+ return v, nil
}
// backtrack works backwards from the current failed solution to find the next
// solution to try.
func (s *solver) backtrack() bool {
- if len(s.versions) == 0 {
+ if len(s.vqs) == 0 {
// nothing to backtrack to
return false
}
for {
for {
- if len(s.versions) == 0 {
+ if len(s.vqs) == 0 {
// no more versions, nowhere further to backtrack
return false
}
- if s.versions[len(s.versions)-1].failed {
+ if s.vqs[len(s.vqs)-1].failed {
break
}
- s.versions, s.versions[len(s.versions)-1] = s.versions[:len(s.versions)-1], nil
+ s.vqs, s.vqs[len(s.vqs)-1] = s.vqs[:len(s.vqs)-1], nil
// Pop selections off until we get to a project.
var proj bool
@@ -779,7 +871,7 @@
}
// Grab the last versionQueue off the list of queues
- q := s.versions[len(s.versions)-1]
+ q := s.vqs[len(s.vqs)-1]
// Walk back to the next project
var awp atomWithPackages
var proj bool
@@ -793,7 +885,7 @@
}
// Advance the queue past the current version, which we know is bad
- // TODO is it feasible to make available the failure reason here?
+ // TODO(sdboyer) is it feasible to make available the failure reason here?
if q.advance(nil) == nil && !q.isExhausted() {
// Search for another acceptable version of this failed dep in its queue
if s.findValidVersion(q, awp.pl) == nil {
@@ -817,11 +909,11 @@
// No solution found; continue backtracking after popping the queue
// we just inspected off the list
// GC-friendly pop pointer elem in slice
- s.versions, s.versions[len(s.versions)-1] = s.versions[:len(s.versions)-1], nil
+ s.vqs, s.vqs[len(s.vqs)-1] = s.vqs[:len(s.vqs)-1], nil
}
// Backtracking was successful if loop ended before running out of versions
- if len(s.versions) == 0 {
+ if len(s.vqs) == 0 {
return false
}
s.attempts++
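The GC-friendly pop used in backtrack() relies on Go's tuple assignment: the element is nilled out in the backing array at the same time the slice shrinks, so the popped pointer is no longer reachable. A minimal standalone illustration (the struct here is simplified):

```go
package main

import "fmt"

type versionQueue struct{ id string }

func main() {
	vqs := []*versionQueue{{"a"}, {"b"}, {"c"}}
	// Tuple assignment: index expressions on the left are evaluated against
	// the old slice, so this both shrinks vqs and nils the vacated slot in
	// the backing array, letting the GC reclaim the popped *versionQueue.
	vqs, vqs[len(vqs)-1] = vqs[:len(vqs)-1], nil
	fmt.Println(len(vqs)) // 2
}
```

Without the nil assignment, the backing array would keep the popped queue (and everything it references) alive until the slice itself is garbage.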
@@ -862,16 +954,6 @@
return false
}
- rname := s.rm.Name()
- // *always* put root project first
- // TODO wait, it shouldn't be possible to have root in here...?
- if iname.LocalName == rname {
- return true
- }
- if jname.LocalName == rname {
- return false
- }
-
_, ilock := s.rlm[iname]
_, jlock := s.rlm[jname]
@@ -889,8 +971,6 @@
// isn't locked by the root. And, because being locked by root is the only
// way avoid that call when making a version queue, we know we're gonna have
// to pay that cost anyway.
- //
- // TODO ...at least, 'til we allow 'preferred' versions via non-root locks
// We can safely ignore an err from ListVersions here because, if there is
// an actual problem, it'll be noted and handled somewhere else saner in the
@@ -916,14 +996,14 @@
}
func (s *solver) fail(id ProjectIdentifier) {
- // TODO does this need updating, now that we have non-project package
+ // TODO(sdboyer) does this need updating, now that we have non-project package
// selection?
// skip if the root project
- if s.rm.Name() != id.LocalName {
+ if s.params.ImportRoot != id.ProjectRoot {
// just look for the first (oldest) one; the backtracker will necessarily
// traverse through and pop off any earlier ones
- for _, vq := range s.versions {
+ for _, vq := range s.vqs {
if vq.id.eq(id) {
vq.failed = true
return
@@ -951,6 +1031,17 @@
panic(fmt.Sprintf("canary - shouldn't be possible %s", err))
}
+ // If this atom has a lock, pull it out so that we can potentially inject
+ // preferred versions into any bmis we enqueue
+ _, l, _ := s.b.getProjectInfo(a.a)
+ var lmap map[ProjectIdentifier]Version
+ if l != nil {
+ lmap = make(map[ProjectIdentifier]Version)
+ for _, lp := range l.Projects() {
+ lmap[lp.Ident()] = lp.Version()
+ }
+ }
+
for _, dep := range deps {
s.sel.pushDep(dependency{depender: a.a, dep: dep})
// Go through all the packages introduced on this dep, selecting only
@@ -965,11 +1056,18 @@
}
if len(newp) > 0 {
- heap.Push(s.unsel, bimodalIdentifier{id: dep.Ident, pl: newp})
+ bmi := bimodalIdentifier{
+ id: dep.Ident,
+ pl: newp,
+ // This puts in a preferred version if one's in the map, else
+ // drops in the zero value (nil)
+ prefv: lmap[dep.Ident],
+ }
+ heap.Push(s.unsel, bmi)
}
if s.sel.depperCount(dep.Ident) == 1 {
- s.names[dep.Ident.LocalName] = dep.Ident.netName()
+ s.names[dep.Ident.ProjectRoot] = dep.Ident.netName()
}
}
}
@@ -995,6 +1093,17 @@
panic(fmt.Sprintf("canary - shouldn't be possible %s", err))
}
+ // If this atom has a lock, pull it out so that we can potentially inject
+ // preferred versions into any bmis we enqueue
+ _, l, _ := s.b.getProjectInfo(a.a)
+ var lmap map[ProjectIdentifier]Version
+ if l != nil {
+ lmap = make(map[ProjectIdentifier]Version)
+ for _, lp := range l.Projects() {
+ lmap[lp.Ident()] = lp.Version()
+ }
+ }
+
for _, dep := range deps {
s.sel.pushDep(dependency{depender: a.a, dep: dep})
// Go through all the packages introduced on this dep, selecting only
@@ -1009,11 +1118,18 @@
}
if len(newp) > 0 {
- heap.Push(s.unsel, bimodalIdentifier{id: dep.Ident, pl: newp})
+ bmi := bimodalIdentifier{
+ id: dep.Ident,
+ pl: newp,
+ // This puts in a preferred version if one's in the map, else
+ // drops in the zero value (nil)
+ prefv: lmap[dep.Ident],
+ }
+ heap.Push(s.unsel, bmi)
}
if s.sel.depperCount(dep.Ident) == 1 {
- s.names[dep.Ident.LocalName] = dep.Ident.netName()
+ s.names[dep.Ident.ProjectRoot] = dep.Ident.netName()
}
}
}
@@ -1034,7 +1150,7 @@
// if no parents/importers, remove from unselected queue
if s.sel.depperCount(dep.Ident) == 0 {
- delete(s.names, dep.Ident.LocalName)
+ delete(s.names, dep.Ident.ProjectRoot)
s.unsel.remove(bimodalIdentifier{id: dep.Ident, pl: dep.pl})
}
}
@@ -1043,28 +1159,28 @@
}
func (s *solver) logStart(bmi bimodalIdentifier) {
- if !s.o.Trace {
+ if !s.params.Trace {
return
}
- prefix := strings.Repeat("| ", len(s.versions)+1)
- // TODO how...to list the packages in the limited space we have?
+ prefix := strings.Repeat("| ", len(s.vqs)+1)
+ // TODO(sdboyer) how...to list the packages in the limited space we have?
s.tl.Printf("%s\n", tracePrefix(fmt.Sprintf("? attempting %s (with %v packages)", bmi.id.errString(), len(bmi.pl)), prefix, prefix))
}
func (s *solver) logSolve(args ...interface{}) {
- if !s.o.Trace {
+ if !s.params.Trace {
return
}
- preflen := len(s.versions)
+ preflen := len(s.vqs)
var msg string
if len(args) == 0 {
// Generate message based on current solver state
- if len(s.versions) == 0 {
+ if len(s.vqs) == 0 {
msg = "✓ (root)"
} else {
- vq := s.versions[len(s.versions)-1]
+ vq := s.vqs[len(s.vqs)-1]
msg = fmt.Sprintf("✓ select %s at %s", vq.id.errString(), vq.current())
}
} else {
@@ -1076,10 +1192,10 @@
msg = tracePrefix(fmt.Sprintf(data, args[1:]), "| ", "| ")
case traceError:
// We got a special traceError, use its custom method
- msg = tracePrefix(data.traceString(), "| ", "x ")
+ msg = tracePrefix(data.traceString(), "| ", "✗ ")
case error:
// Regular error; still use the x leader but default Error() string
- msg = tracePrefix(data.Error(), "| ", "x ")
+ msg = tracePrefix(data.Error(), "| ", "✗ ")
default:
// panic here because this can *only* mean a stupid internal bug
panic("canary - must pass a string as first arg to logSolve, or no args at all")
@@ -1107,9 +1223,6 @@
func pa2lp(pa atom, pkgs map[string]struct{}) LockedProject {
lp := LockedProject{
pi: pa.id.normalize(), // shouldn't be necessary, but normalize just in case
- // path is unnecessary duplicate information now, but if we ever allow
- // nesting as a conflict resolution mechanism, it will become valuable
- path: string(pa.id.LocalName),
}
switch v := pa.v.(type) {
@@ -1125,7 +1238,7 @@
}
for pkg := range pkgs {
- lp.pkgs = append(lp.pkgs, strings.TrimPrefix(pkg, string(pa.id.LocalName)+string(os.PathSeparator)))
+ lp.pkgs = append(lp.pkgs, strings.TrimPrefix(pkg, string(pa.id.ProjectRoot)+string(os.PathSeparator)))
}
sort.Strings(lp.pkgs)
diff --git a/vendor/github.com/sdboyer/gps/source_manager.go b/vendor/github.com/sdboyer/gps/source_manager.go
new file mode 100644
index 0000000..86627a1
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/source_manager.go
@@ -0,0 +1,297 @@
+package gps
+
+import (
+ "encoding/json"
+ "fmt"
+ "go/build"
+ "os"
+ "path"
+
+ "github.com/Masterminds/vcs"
+)
+
+// A SourceManager is responsible for retrieving, managing, and interrogating
+// source repositories. Its primary purpose is to serve the needs of a Solver,
+// but it is handy for other purposes, as well.
+//
+// gps's built-in SourceManager, accessible via NewSourceManager(), is
+// intended to be generic and sufficient for any purpose. It provides some
+// additional semantics around the methods defined here.
+type SourceManager interface {
+ // RepoExists checks if a repository exists, either upstream or in the
+ // SourceManager's central repository cache.
+ RepoExists(ProjectRoot) (bool, error)
+
+ // ListVersions retrieves a list of the available versions for a given
+ // repository name.
+ ListVersions(ProjectRoot) ([]Version, error)
+
+ // RevisionPresentIn indicates whether the provided Revision is present in
+ // the given repository.
+ RevisionPresentIn(ProjectRoot, Revision) (bool, error)
+
+ // ListPackages retrieves a tree of the Go packages at or below the provided
+ // import path, at the provided version.
+ ListPackages(ProjectRoot, Version) (PackageTree, error)
+
+ // GetProjectInfo returns manifest and lock information for the provided
+ // import path. gps currently requires that projects be rooted at their
+ // repository root, which means that this ProjectRoot must also be a
+ // repository root.
+ GetProjectInfo(ProjectRoot, Version) (Manifest, Lock, error)
+
+ // ExportProject writes out the tree of the provided import path, at the
+ // provided version, to the provided directory.
+ ExportProject(ProjectRoot, Version, string) error
+
+ // Release lets go of any locks held by the SourceManager.
+ Release()
+}
+
+// A ProjectAnalyzer is responsible for analyzing a path for Manifest and Lock
+// information. Tools relying on gps must implement one.
+type ProjectAnalyzer interface {
+ GetInfo(string, ProjectRoot) (Manifest, Lock, error)
+}
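A tool's ProjectAnalyzer is the hook through which gps learns about manifests and locks. As a self-contained sketch (the stand-in `Manifest`/`Lock` types and the `naiveAnalyzer` name are illustrative, not gps's actual declarations), a minimal implementation could look like:

```go
package main

import "fmt"

// Stand-in types so this sketch compiles on its own; the real Manifest and
// Lock interfaces live in the gps package.
type Manifest interface{}
type Lock interface{}
type ProjectRoot string

// naiveAnalyzer is a hypothetical ProjectAnalyzer. A real one would parse
// whatever manifest format the tool defines (glide.yaml, etc.) at the given
// filesystem path; returning nils means "no declared constraints".
type naiveAnalyzer struct{}

func (naiveAnalyzer) GetInfo(path string, n ProjectRoot) (Manifest, Lock, error) {
	return nil, nil, nil
}

func main() {
	var a naiveAnalyzer
	m, l, err := a.GetInfo("/tmp/src", ProjectRoot("github.com/sdboyer/gps"))
	fmt.Println(m == nil, l == nil, err == nil)
}
```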
+
+// SourceMgr is the default SourceManager for gps.
+//
+// There's no (planned) reason why it would need to be reimplemented by other
+// tools; control via dependency injection is intended to be sufficient.
+type SourceMgr struct {
+ cachedir string
+ pms map[ProjectRoot]*pmState
+ an ProjectAnalyzer
+ ctx build.Context
+ //pme map[ProjectRoot]error
+}
+
+var _ SourceManager = &SourceMgr{}
+
+// Holds a projectManager, caches of the managed project's data, and information
+// about the freshness of those caches
+type pmState struct {
+ pm *projectManager
+ cf *os.File // handle for the cache file
+ vcur bool // indicates that we've called ListVersions()
+}
+
+// NewSourceManager produces an instance of gps's built-in SourceManager. It
+// takes a cache directory (where local instances of upstream repositories are
+// stored), a vendor directory for the project currently being worked on, and a
+// force flag indicating whether to overwrite the global cache lock file (if
+// present).
+//
+// The returned SourceManager aggressively caches information wherever possible.
+// It is recommended that, if tools need to do preliminary work involving
+// upstream repository analysis prior to invoking a solve run, they create
+// this SourceManager as early as possible and use it to their ends. That way,
+// the solver can benefit from any caches that may have already been warmed.
+//
+// gps's SourceManager is intended to be threadsafe (if it's not, please
+// file a bug!). It should certainly be safe to reuse from one solving run to
+// the next; however, the fact that it takes a basedir as an argument makes it
+// much less useful for simultaneous use by separate solvers operating on
+// different root projects. This architecture may change in the future.
+func NewSourceManager(an ProjectAnalyzer, cachedir string, force bool) (*SourceMgr, error) {
+ if an == nil {
+ return nil, fmt.Errorf("a ProjectAnalyzer must be provided to the SourceManager")
+ }
+
+ err := os.MkdirAll(cachedir, 0777)
+ if err != nil {
+ return nil, err
+ }
+
+ glpath := path.Join(cachedir, "sm.lock")
+ _, err = os.Stat(glpath)
+ if err == nil && !force {
+ return nil, fmt.Errorf("cache lock file %s exists - another process crashed or is still running?", glpath)
+ }
+
+ _, err = os.OpenFile(glpath, os.O_CREATE|os.O_RDONLY, 0700) // is 0700 sane for this purpose?
+ if err != nil {
+ return nil, fmt.Errorf("failed to create global cache lock file at %s with err %s", glpath, err)
+ }
+
+ ctx := build.Default
+ // Replace GOPATH with our cache dir
+ ctx.GOPATH = cachedir
+
+ return &SourceMgr{
+ cachedir: cachedir,
+ pms: make(map[ProjectRoot]*pmState),
+ ctx: ctx,
+ an: an,
+ }, nil
+}
+
+// Release lets go of any locks held by the SourceManager.
+func (sm *SourceMgr) Release() {
+ os.Remove(path.Join(sm.cachedir, "sm.lock"))
+}
+
+// GetProjectInfo returns manifest and lock information for the provided import
+// path. gps currently requires that projects be rooted at their repository
+// root, which means that this ProjectRoot must also be a repository root.
+//
+// The work of producing the manifest and lock information is delegated to the
+// injected ProjectAnalyzer.
+func (sm *SourceMgr) GetProjectInfo(n ProjectRoot, v Version) (Manifest, Lock, error) {
+ pmc, err := sm.getProjectManager(n)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ return pmc.pm.GetInfoAt(v)
+}
+
+// ListPackages retrieves a tree of the Go packages at or below the provided
+// import path, at the provided version.
+func (sm *SourceMgr) ListPackages(n ProjectRoot, v Version) (PackageTree, error) {
+ pmc, err := sm.getProjectManager(n)
+ if err != nil {
+ return PackageTree{}, err
+ }
+
+ return pmc.pm.ListPackages(v)
+}
+
+// ListVersions retrieves a list of the available versions for a given
+// repository name.
+//
+// The list is not sorted; while it may be returned in the order that the
+// underlying VCS reports version information, no guarantee is made. It is
+// expected that the caller either not care about order, or sort the result
+// themselves.
+//
+// This list is always retrieved from upstream; if upstream is not accessible
+// (network outage, access issues, or the resource actually went away), an error
+// will be returned.
+func (sm *SourceMgr) ListVersions(n ProjectRoot) ([]Version, error) {
+ pmc, err := sm.getProjectManager(n)
+ if err != nil {
+ // TODO(sdboyer) More-er proper-er errors
+ return nil, err
+ }
+
+ return pmc.pm.ListVersions()
+}
+
+// RevisionPresentIn indicates whether the provided Revision is present in the given
+// repository.
+func (sm *SourceMgr) RevisionPresentIn(n ProjectRoot, r Revision) (bool, error) {
+ pmc, err := sm.getProjectManager(n)
+ if err != nil {
+ // TODO(sdboyer) More-er proper-er errors
+ return false, err
+ }
+
+ return pmc.pm.RevisionPresentIn(r)
+}
+
+// RepoExists checks if a repository exists, either upstream or in the cache,
+// for the provided ProjectRoot.
+func (sm *SourceMgr) RepoExists(n ProjectRoot) (bool, error) {
+ pms, err := sm.getProjectManager(n)
+ if err != nil {
+ return false, err
+ }
+
+ return pms.pm.CheckExistence(existsInCache) || pms.pm.CheckExistence(existsUpstream), nil
+}
+
+// ExportProject writes out the tree of the provided import path, at the
+// provided version, to the provided directory.
+func (sm *SourceMgr) ExportProject(n ProjectRoot, v Version, to string) error {
+ pms, err := sm.getProjectManager(n)
+ if err != nil {
+ return err
+ }
+
+ return pms.pm.ExportVersionTo(v, to)
+}
+
+// getProjectManager gets the project manager for the given ProjectRoot.
+//
+// If no such manager yet exists, it attempts to create one.
+func (sm *SourceMgr) getProjectManager(n ProjectRoot) (*pmState, error) {
+ // Check pm cache and errcache first
+ if pm, exists := sm.pms[n]; exists {
+ return pm, nil
+ //} else if pme, errexists := sm.pme[name]; errexists {
+ //return nil, pme
+ }
+
+ repodir := path.Join(sm.cachedir, "src", string(n))
+ // TODO(sdboyer) be more robust about this
+ r, err := vcs.NewRepo("https://"+string(n), repodir)
+ if err != nil {
+ // TODO(sdboyer) be better
+ return nil, err
+ }
+ if !r.CheckLocal() {
+ // TODO(sdboyer) cloning the repo here puts it on a blocking, and possibly
+ // unnecessary path. defer it
+ err = r.Get()
+ if err != nil {
+ // TODO(sdboyer) be better
+ return nil, err
+ }
+ }
+
+ // Ensure cache dir exists
+ metadir := path.Join(sm.cachedir, "metadata", string(n))
+ err = os.MkdirAll(metadir, 0777)
+ if err != nil {
+ // TODO(sdboyer) be better
+ return nil, err
+ }
+
+ pms := &pmState{}
+ cpath := path.Join(metadir, "cache.json")
+ fi, err := os.Stat(cpath)
+ var dc *projectDataCache
+ if fi != nil {
+ pms.cf, err = os.OpenFile(cpath, os.O_RDWR, 0777)
+ if err != nil {
+ // TODO(sdboyer) be better
+ return nil, fmt.Errorf("Err on opening metadata cache file: %s", err)
+ }
+
+ err = json.NewDecoder(pms.cf).Decode(&dc)
+ if err != nil {
+ // TODO(sdboyer) be better
+ return nil, fmt.Errorf("Err on JSON decoding metadata cache file: %s", err)
+ }
+ } else {
+ // TODO(sdboyer) commented this out for now, until we manage it correctly
+ //pms.cf, err = os.Create(cpath)
+ //if err != nil {
+ //// TODO(sdboyer) be better
+ //return nil, fmt.Errorf("Err on creating metadata cache file: %s", err)
+ //}
+
+ dc = &projectDataCache{
+ Infos: make(map[Revision]projectInfo),
+ Packages: make(map[Revision]PackageTree),
+ VMap: make(map[Version]Revision),
+ RMap: make(map[Revision][]Version),
+ }
+ }
+
+ pm := &projectManager{
+ n: n,
+ ctx: sm.ctx,
+ an: sm.an,
+ dc: dc,
+ crepo: &repo{
+ rpath: repodir,
+ r: r,
+ },
+ }
+
+ pms.pm = pm
+ sm.pms[n] = pms
+ return pms, nil
+}
diff --git a/vendor/github.com/sdboyer/gps/types.go b/vendor/github.com/sdboyer/gps/types.go
new file mode 100644
index 0000000..f720fa2
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/types.go
@@ -0,0 +1,195 @@
+package gps
+
+import (
+ "fmt"
+ "math/rand"
+ "strconv"
+)
+
+// ProjectRoot is the topmost import path in a tree of other import paths - the
+// root of the tree. In gps' current design, ProjectRoots have to correspond to
+// a repository root (mostly), but their real purpose is to identify the root
+// import path of a "project", logically encompassing all child packages.
+//
+// Projects are a crucial unit of operation in gps. Constraints are declared by
+// a project's manifest, and apply to all packages in a ProjectRoot's tree.
+// Solving itself mostly proceeds on a project-by-project basis.
+//
+// Aliasing string types is usually a bit of an anti-pattern. We do it here as a
+// means of clarifying API intent. This is important because Go's package
+// management domain has lots of different path-ish strings floating around:
+//
+// actual directories:
+// /home/sdboyer/go/src/github.com/sdboyer/gps/example
+// URLs:
+// https://github.com/sdboyer/gps
+// import paths:
+// github.com/sdboyer/gps/example
+// portions of import paths that refer to a package:
+// example
+// portions that could not possibly refer to anything sane:
+// github.com/sdboyer
+// portions that correspond to a repository root:
+// github.com/sdboyer/gps
+//
+// While not a panacea, defining ProjectRoot at least allows us to clearly
+// identify when one of these path-ish strings is *supposed* to have certain
+// semantics.
+type ProjectRoot string
+
+// A ProjectIdentifier is, more or less, the name of a dependency. It is related
+// to, but differs in two key ways from, an import path.
+//
+// First, ProjectIdentifiers do not identify a single package. Rather, they
+// encompass the whole tree of packages that exist at or below their
+// ProjectRoot. In gps' current design, this ProjectRoot must correspond to the
+// root of a repository, though this may not always be the case.
+//
+// Second, ProjectIdentifiers can optionally carry a NetworkName, which
+// identifies where the underlying source code can be located on the network.
+// These can be either a full URL, including protocol, or plain import paths.
+// So, these are all valid data for NetworkName:
+//
+// github.com/sdboyer/gps
+// github.com/fork/gps
+// git@github.com:sdboyer/gps
+// https://github.com/sdboyer/gps
+//
+// With plain import paths, network addresses are derived purely through an
+// algorithm. By having an explicit network name, it becomes possible to, for
+// example, transparently substitute a fork for an original upstream repository.
+//
+// Note that gps makes no guarantees about the actual import paths contained in
+// a repository aligning with ImportRoot. If tools, or their users, specify an
+// alternate NetworkName that contains a repository with incompatible internal
+// import paths, gps will fail. (gps does no import rewriting.)
+//
+// Also note that if different projects' manifests report a different
+// NetworkName for a given ImportRoot, it is a solve failure. Everyone has to
+// agree on where a given import path should be sourced from.
+//
+// If NetworkName is not explicitly set, gps will derive the network address from
+// the ImportRoot using a similar algorithm to that of the official go tooling.
+type ProjectIdentifier struct {
+ ProjectRoot ProjectRoot
+ NetworkName string
+}
+
+// A ProjectConstraint combines a ProjectIdentifier with a Constraint. It
+// indicates that, if packages contained in the ProjectIdentifier enter the
+// depgraph, they must do so at a version that is allowed by the Constraint.
+type ProjectConstraint struct {
+ Ident ProjectIdentifier
+ Constraint Constraint
+}
+
+func (i ProjectIdentifier) less(j ProjectIdentifier) bool {
+ if i.ProjectRoot < j.ProjectRoot {
+ return true
+ }
+ if j.ProjectRoot < i.ProjectRoot {
+ return false
+ }
+
+ return i.NetworkName < j.NetworkName
+}
+
+func (i ProjectIdentifier) eq(j ProjectIdentifier) bool {
+ if i.ProjectRoot != j.ProjectRoot {
+ return false
+ }
+ if i.NetworkName == j.NetworkName {
+ return true
+ }
+
+ if (i.NetworkName == "" && j.NetworkName == string(j.ProjectRoot)) ||
+ (j.NetworkName == "" && i.NetworkName == string(i.ProjectRoot)) {
+ return true
+ }
+
+ // TODO(sdboyer) attempt conversion to URL and compare base + path
+
+ return false
+}
+
+func (i ProjectIdentifier) netName() string {
+ if i.NetworkName == "" {
+ return string(i.ProjectRoot)
+ }
+ return i.NetworkName
+}
+
+func (i ProjectIdentifier) errString() string {
+ if i.NetworkName == "" || i.NetworkName == string(i.ProjectRoot) {
+ return string(i.ProjectRoot)
+ }
+ return fmt.Sprintf("%s (from %s)", i.ProjectRoot, i.NetworkName)
+}
+
+func (i ProjectIdentifier) normalize() ProjectIdentifier {
+ if i.NetworkName == "" {
+ i.NetworkName = string(i.ProjectRoot)
+ }
+
+ return i
+}
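The equality rule above - an empty NetworkName is shorthand for "same as the root" - can be exercised standalone. This sketch copies the types and eq logic from the diff:

```go
package main

import "fmt"

type ProjectRoot string

type ProjectIdentifier struct {
	ProjectRoot ProjectRoot
	NetworkName string
}

// eq reproduces the logic above: identifiers match if their roots match and
// their network names agree, treating an empty NetworkName as equivalent to
// a NetworkName equal to the root's string form.
func (i ProjectIdentifier) eq(j ProjectIdentifier) bool {
	if i.ProjectRoot != j.ProjectRoot {
		return false
	}
	if i.NetworkName == j.NetworkName {
		return true
	}
	return (i.NetworkName == "" && j.NetworkName == string(j.ProjectRoot)) ||
		(j.NetworkName == "" && i.NetworkName == string(i.ProjectRoot))
}

func main() {
	a := ProjectIdentifier{ProjectRoot: "github.com/sdboyer/gps"}
	b := ProjectIdentifier{ProjectRoot: "github.com/sdboyer/gps", NetworkName: "github.com/sdboyer/gps"}
	c := ProjectIdentifier{ProjectRoot: "github.com/sdboyer/gps", NetworkName: "github.com/fork/gps"}
	fmt.Println(a.eq(b), a.eq(c)) // true false
}
```

This is why a fork substituted via NetworkName is treated as a distinct source even though its ProjectRoot (and thus its import paths) is unchanged.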
+
+// Package represents a Go package. It contains a subset of the information
+// go/build.Package does.
+type Package struct {
+ ImportPath, CommentPath string
+ Name string
+ Imports []string
+ TestImports []string
+}
+
+// bimodalIdentifiers are used to track work to be done in the unselected queue.
+// TODO(sdboyer) marker for root, to know to ignore prefv...or can we do unselected queue
+// sorting only?
+type bimodalIdentifier struct {
+ id ProjectIdentifier
+ // List of packages required within/under the ProjectIdentifier
+ pl []string
+ // prefv is used to indicate a 'preferred' version. This is expected to be
+ // derived from a dep's lock data, or else is empty.
+ prefv Version
+ // Indicates that the bmi came from the root project originally
+ fromRoot bool
+}
+
+type atom struct {
+ id ProjectIdentifier
+ v Version
+}
+
+// With a random revision and no name, collisions are...unlikely
+var nilpa = atom{
+ v: Revision(strconv.FormatInt(rand.Int63(), 36)),
+}
+
+type atomWithPackages struct {
+ a atom
+ pl []string
+}
+
+//type byImportPath []Package
+
+//func (s byImportPath) Len() int { return len(s) }
+//func (s byImportPath) Less(i, j int) bool { return s[i].ImportPath < s[j].ImportPath }
+//func (s byImportPath) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
+
+// completeDep (name hopefully to change) provides the whole picture of a
+// dependency - the root (repo and project, since currently we assume the two
+// are the same) name, a constraint, and the actual packages needed that are
+// under that root.
+type completeDep struct {
+ // The base ProjectConstraint
+ ProjectConstraint
+ // The specific packages required from the ProjectConstraint's project
+ pl []string
+}
+
+type dependency struct {
+ depender atom
+ dep completeDep
+}
diff --git a/vendor/github.com/sdboyer/vsolver/version.go b/vendor/github.com/sdboyer/gps/version.go
similarity index 98%
rename from vendor/github.com/sdboyer/vsolver/version.go
rename to vendor/github.com/sdboyer/gps/version.go
index bb30631..57d37ec 100644
--- a/vendor/github.com/sdboyer/vsolver/version.go
+++ b/vendor/github.com/sdboyer/gps/version.go
@@ -1,15 +1,15 @@
-package vsolver
+package gps
import "github.com/Masterminds/semver"
-// Version represents one of the different types of versions used by vsolver.
+// Version represents one of the different types of versions used by gps.
//
// Version composes Constraint, because all versions can be used as a constraint
// (where they allow one, and only one, version - themselves), but constraints
// are not necessarily discrete versions.
//
// Version is an interface, but it contains private methods, which restricts it
-// to vsolver's own internal implementations. We do this for the confluence of
+// to gps's own internal implementations. We do this for the confluence of
// two reasons: the implementation of Versions is complete (there is no case in
// which we'd need other types), and the implementation relies on type magic
// under the hood, which would be unsafe to do if other dynamic types could be
diff --git a/vendor/github.com/sdboyer/gps/version_queue.go b/vendor/github.com/sdboyer/gps/version_queue.go
new file mode 100644
index 0000000..e74a1da
--- /dev/null
+++ b/vendor/github.com/sdboyer/gps/version_queue.go
@@ -0,0 +1,142 @@
+package gps
+
+import (
+ "fmt"
+ "strings"
+)
+
+type failedVersion struct {
+ v Version
+ f error
+}
+
+type versionQueue struct {
+ id ProjectIdentifier
+ pi []Version
+ lockv, prefv Version
+ fails []failedVersion
+ b sourceBridge
+ failed bool
+ allLoaded bool
+}
+
+func newVersionQueue(id ProjectIdentifier, lockv, prefv Version, b sourceBridge) (*versionQueue, error) {
+ vq := &versionQueue{
+ id: id,
+ b: b,
+ }
+
+ // Lock goes in first, if present
+ if lockv != nil {
+ vq.lockv = lockv
+ vq.pi = append(vq.pi, lockv)
+ }
+
+ // Preferred version next
+ if prefv != nil {
+ vq.prefv = prefv
+ vq.pi = append(vq.pi, prefv)
+ }
+
+ if len(vq.pi) == 0 {
+ var err error
+ vq.pi, err = vq.b.listVersions(vq.id)
+ if err != nil {
+ // TODO(sdboyer) pushing this error this early entails that we
+ // unconditionally deep scan (e.g. vendor), as well as hitting the
+ // network.
+ return nil, err
+ }
+ vq.allLoaded = true
+ }
+
+ return vq, nil
+}
+
+func (vq *versionQueue) current() Version {
+ if len(vq.pi) > 0 {
+ return vq.pi[0]
+ }
+
+ return nil
+}
+
+// advance moves the versionQueue forward to the next available version,
+// recording the failure that eliminated the current version.
+func (vq *versionQueue) advance(fail error) (err error) {
+ // Nothing in the queue means...nothing in the queue, nicely enough
+ if len(vq.pi) == 0 {
+ return
+ }
+
+ // Record the fail reason and pop the queue
+ vq.fails = append(vq.fails, failedVersion{
+ v: vq.pi[0],
+ f: fail,
+ })
+ vq.pi = vq.pi[1:]
+
+ // *now*, if the queue is empty, ensure all versions have been loaded
+ if len(vq.pi) == 0 {
+ if vq.allLoaded {
+ // This branch is hit when the queue is fully exhausted, after having
+ // already been populated by listVersions() on a previous
+ // advance()
+ return
+ }
+
+ vq.allLoaded = true
+ vq.pi, err = vq.b.listVersions(vq.id)
+ if err != nil {
+ return err
+ }
+
+ // search for and remove locked and pref versions
+ //
+ // could use the version comparator for binary search here to avoid
+ // O(n) each time...if it matters
+ for k, pi := range vq.pi {
+ if pi == vq.lockv || pi == vq.prefv {
+ // GC-safe deletion for slice w/pointer elements
+ vq.pi, vq.pi[len(vq.pi)-1] = append(vq.pi[:k], vq.pi[k+1:]...), nil
+ //vq.pi = append(vq.pi[:k], vq.pi[k+1:]...)
+ }
+ }
+
+ if len(vq.pi) == 0 {
+ // If listing versions added nothing (new), then return now
+ return
+ }
+ }
+
+ // We're finally sure that there's something in the queue. Remove the
+ // failure marker, as the current version may have failed, but the next one
+ // hasn't yet
+ vq.failed = false
+
+ // At this point the queue is guaranteed non-empty; exhaustion is never
+ // signaled from here, as vq semantics dictate that callers detect the
+ // end of the queue via isExhausted() or a nil current().
+ return
+}
+
+// isExhausted reports whether the queue has definitely been exhausted, in
+// which case it returns true.
+//
+// It may return false negatives - suggesting that there is more in the queue
+// when a subsequent call to current() will return nil. Plan accordingly.
+func (vq *versionQueue) isExhausted() bool {
+ if !vq.allLoaded {
+ return false
+ }
+ return len(vq.pi) == 0
+}
+
+func (vq *versionQueue) String() string {
+ var vs []string
+
+ for _, v := range vq.pi {
+ vs = append(vs, v.String())
+ }
+ return fmt.Sprintf("[%s]", strings.Join(vs, ", "))
+}
diff --git a/vendor/github.com/sdboyer/vsolver/version_test.go b/vendor/github.com/sdboyer/gps/version_test.go
similarity index 99%
rename from vendor/github.com/sdboyer/vsolver/version_test.go
rename to vendor/github.com/sdboyer/gps/version_test.go
index 738f850..f8b9b89 100644
--- a/vendor/github.com/sdboyer/vsolver/version_test.go
+++ b/vendor/github.com/sdboyer/gps/version_test.go
@@ -1,4 +1,4 @@
-package vsolver
+package gps
import (
"sort"
diff --git a/vendor/github.com/sdboyer/vsolver/README.md b/vendor/github.com/sdboyer/vsolver/README.md
deleted file mode 100644
index 6126f29..0000000
--- a/vendor/github.com/sdboyer/vsolver/README.md
+++ /dev/null
@@ -1,105 +0,0 @@
-# vsolver
-
-`vsolver` is a specialized [SAT
-solver](https://en.wikipedia.org/wiki/Boolean_satisfiability_problem),
-designed as an engine for Go package management. The initial plan is
-integration into [glide](https://github.com/Masterminds/glide), but
-`vsolver` could be used by any tool interested in [fully
-solving](www.mancoosi.org/edos/manager/) [the package management
-problem](https://medium.com/@sdboyer/so-you-want-to-write-a-package-manager-4ae9c17d9527).
-
-**NOTE - `vsolver` isn’t ready yet, but it’s getting close.**
-
-The implementation is derived from the solver used in Dart's
-[pub](https://github.com/dart-lang/pub/tree/master/lib/src/solver)
-package management tool.
-
-## Assumptions
-
-Package management is far too complex to be assumption-less. `vsolver`
-tries to keep its assumptions to the minimum, supporting as many
-situations as is possible while still maintaining a predictable,
-well-formed system.
-
-* Go 1.6, or 1.5 with `GO15VENDOREXPERIMENT = 1` set. `vendor`
- directories are a requirement.
-* You don't manually change what's under `vendor/`. That’s tooling’s
- job.
-* A **project** concept, where projects comprise the set of Go packages
- in a rooted tree on the filesystem. By happy (not) accident, that
- rooted tree is exactly the same set of packages covered by a `vendor/`
- directory.
-* A manifest-and-lock approach to tracking project manifest data. The
- solver takes manifest (and, optionally, lock)-type data as inputs, and
- produces lock-type data as its output. Tools decide how to actually
- store this data, but these should generally be at the root of the
- project tree.
-
-Manifests? Locks? Eeew. Yes, we also think it'd be swell if we didn't need
-metadata files. We love the idea of Go packages as standalone, self-describing
-code. Unfortunately, the wheels come off that idea as soon as versioning and
-cross-project/repository dependencies happen. [Universe alignment is
-hard](https://medium.com/@sdboyer/so-you-want-to-write-a-package-manager-4ae9c17d9527);
-trying to intermix version information directly with the code would only make
-matters worse.
-
-## Arguments
-
-Some folks are against using a solver in Go. Even the concept is repellent.
-These are some of the arguments that are raised:
-
-> "It seems complicated, and idiomatic Go things are simple!"
-
-Complaining about this is shooting the messenger.
-
-Selecting acceptable versions out of a big dependency graph is a [boolean
-satisfiability](https://en.wikipedia.org/wiki/Boolean_satisfiability_problem)
-(or SAT) problem: given all possible combinations of valid dependencies, we’re
-trying to find a set that satisfies all the mutual requirements. Obviously that
-requires version numbers lining up, but it can also (and `vsolver` will/does)
-enforce invariants like “no import cycles” and type compatibility between
-packages. All of those requirements must be rechecked *every time* we discovery
-and add a new project to the graph.
-
-SAT was one of the very first problems to be proven NP-complete. **OF COURSE
-IT’S COMPLICATED**. We didn’t make it that way. Truth is, though, solvers are
-an ideal way of tackling this kind of problem: it lets us walk the line between
-pretending like versions don’t exist (a la `go get`) and pretending like only
-one version of a dep could ever work, ever (most of the current community
-tools).
-
-> "(Tool X) uses a solver and I don't like that tool’s UX!"
-
-Sure, there are plenty of abstruse package managers relying on SAT
-solvers out there. But that doesn’t mean they ALL have to be confusing.
-`vsolver`’s algorithms are artisinally handcrafted with ❤️ for Go’s
-use case, and we are committed to making Go dependency management a
-grokkable process.
-
-## Features
-
-Yes, most people will probably find most of this list incomprehensible
-right now. We'll improve/add explanatory links as we go!
-
-* [x] [Passing bestiary of tests](https://github.com/sdboyer/vsolver/issues/1)
- brought over from dart
-* [x] Dependency constraints based on [SemVer](http://semver.org/),
- branches, and revisions. AKA, "all the ways you might depend on
- Go code now, but coherently organized."
-* [x] Define different network addresses for a given import path
-* [ ] Global project aliasing. This is a bit different than the previous.
-* [x] Bi-modal analysis (project-level and package-level)
-* [ ] Specific sub-package dependencies
-* [ ] Enforcing an acyclic project graph (mirroring the Go compiler's
- enforcement of an acyclic package import graph)
-* [ ] On-the-fly static analysis (e.g. for incompatibility assessment,
- type escaping)
-* [ ] Optional package duplication as a conflict resolution mechanism
-* [ ] Faaaast, enabled by aggressive caching of project metadata
-* [ ] Lock information parameterized by build tags (including, but not
- limited to, `GOOS`/`GOARCH`)
-* [ ] Non-repository root and nested manifest/lock pairs
-
-Note that these goals are not fixed - we may drop some as we continue
-working. Some are also probably out of scope for the solver itself,
-but still related to the solver's operation.
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/igmain/a.go b/vendor/github.com/sdboyer/vsolver/_testdata/src/igmain/a.go
deleted file mode 100644
index 921df11..0000000
--- a/vendor/github.com/sdboyer/vsolver/_testdata/src/igmain/a.go
+++ /dev/null
@@ -1,12 +0,0 @@
-package simple
-
-import (
- "sort"
-
- "github.com/sdboyer/vsolver"
-)
-
-var (
- _ = sort.Strings
- _ = vsolver.Solve
-)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/igmaint/a.go b/vendor/github.com/sdboyer/vsolver/_testdata/src/igmaint/a.go
deleted file mode 100644
index 921df11..0000000
--- a/vendor/github.com/sdboyer/vsolver/_testdata/src/igmaint/a.go
+++ /dev/null
@@ -1,12 +0,0 @@
-package simple
-
-import (
- "sort"
-
- "github.com/sdboyer/vsolver"
-)
-
-var (
- _ = sort.Strings
- _ = vsolver.Solve
-)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/m1p/a.go b/vendor/github.com/sdboyer/vsolver/_testdata/src/m1p/a.go
deleted file mode 100644
index cf8d759..0000000
--- a/vendor/github.com/sdboyer/vsolver/_testdata/src/m1p/a.go
+++ /dev/null
@@ -1,12 +0,0 @@
-package m1p
-
-import (
- "sort"
-
- "github.com/sdboyer/vsolver"
-)
-
-var (
- _ = sort.Strings
- _ = vsolver.Solve
-)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/nest/a.go b/vendor/github.com/sdboyer/vsolver/_testdata/src/nest/a.go
deleted file mode 100644
index 921df11..0000000
--- a/vendor/github.com/sdboyer/vsolver/_testdata/src/nest/a.go
+++ /dev/null
@@ -1,12 +0,0 @@
-package simple
-
-import (
- "sort"
-
- "github.com/sdboyer/vsolver"
-)
-
-var (
- _ = sort.Strings
- _ = vsolver.Solve
-)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/nest/m1p/a.go b/vendor/github.com/sdboyer/vsolver/_testdata/src/nest/m1p/a.go
deleted file mode 100644
index cf8d759..0000000
--- a/vendor/github.com/sdboyer/vsolver/_testdata/src/nest/m1p/a.go
+++ /dev/null
@@ -1,12 +0,0 @@
-package m1p
-
-import (
- "sort"
-
- "github.com/sdboyer/vsolver"
-)
-
-var (
- _ = sort.Strings
- _ = vsolver.Solve
-)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/ren/m1p/a.go b/vendor/github.com/sdboyer/vsolver/_testdata/src/ren/m1p/a.go
deleted file mode 100644
index cf8d759..0000000
--- a/vendor/github.com/sdboyer/vsolver/_testdata/src/ren/m1p/a.go
+++ /dev/null
@@ -1,12 +0,0 @@
-package m1p
-
-import (
- "sort"
-
- "github.com/sdboyer/vsolver"
-)
-
-var (
- _ = sort.Strings
- _ = vsolver.Solve
-)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/ren/simple/a.go b/vendor/github.com/sdboyer/vsolver/_testdata/src/ren/simple/a.go
deleted file mode 100644
index 921df11..0000000
--- a/vendor/github.com/sdboyer/vsolver/_testdata/src/ren/simple/a.go
+++ /dev/null
@@ -1,12 +0,0 @@
-package simple
-
-import (
- "sort"
-
- "github.com/sdboyer/vsolver"
-)
-
-var (
- _ = sort.Strings
- _ = vsolver.Solve
-)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/simple/a.go b/vendor/github.com/sdboyer/vsolver/_testdata/src/simple/a.go
deleted file mode 100644
index 921df11..0000000
--- a/vendor/github.com/sdboyer/vsolver/_testdata/src/simple/a.go
+++ /dev/null
@@ -1,12 +0,0 @@
-package simple
-
-import (
- "sort"
-
- "github.com/sdboyer/vsolver"
-)
-
-var (
- _ = sort.Strings
- _ = vsolver.Solve
-)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/simpleallt/a.go b/vendor/github.com/sdboyer/vsolver/_testdata/src/simpleallt/a.go
deleted file mode 100644
index 921df11..0000000
--- a/vendor/github.com/sdboyer/vsolver/_testdata/src/simpleallt/a.go
+++ /dev/null
@@ -1,12 +0,0 @@
-package simple
-
-import (
- "sort"
-
- "github.com/sdboyer/vsolver"
-)
-
-var (
- _ = sort.Strings
- _ = vsolver.Solve
-)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/simplet/a.go b/vendor/github.com/sdboyer/vsolver/_testdata/src/simplet/a.go
deleted file mode 100644
index 921df11..0000000
--- a/vendor/github.com/sdboyer/vsolver/_testdata/src/simplet/a.go
+++ /dev/null
@@ -1,12 +0,0 @@
-package simple
-
-import (
- "sort"
-
- "github.com/sdboyer/vsolver"
-)
-
-var (
- _ = sort.Strings
- _ = vsolver.Solve
-)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/simplext/a.go b/vendor/github.com/sdboyer/vsolver/_testdata/src/simplext/a.go
deleted file mode 100644
index 921df11..0000000
--- a/vendor/github.com/sdboyer/vsolver/_testdata/src/simplext/a.go
+++ /dev/null
@@ -1,12 +0,0 @@
-package simple
-
-import (
- "sort"
-
- "github.com/sdboyer/vsolver"
-)
-
-var (
- _ = sort.Strings
- _ = vsolver.Solve
-)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/twopkgs/a.go b/vendor/github.com/sdboyer/vsolver/_testdata/src/twopkgs/a.go
deleted file mode 100644
index 921df11..0000000
--- a/vendor/github.com/sdboyer/vsolver/_testdata/src/twopkgs/a.go
+++ /dev/null
@@ -1,12 +0,0 @@
-package simple
-
-import (
- "sort"
-
- "github.com/sdboyer/vsolver"
-)
-
-var (
- _ = sort.Strings
- _ = vsolver.Solve
-)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/varied/m1p/a.go b/vendor/github.com/sdboyer/vsolver/_testdata/src/varied/m1p/a.go
deleted file mode 100644
index 181620f..0000000
--- a/vendor/github.com/sdboyer/vsolver/_testdata/src/varied/m1p/a.go
+++ /dev/null
@@ -1,12 +0,0 @@
-package m1p
-
-import (
- "sort"
-
- "github.com/sdboyer/vsolver"
-)
-
-var (
- M = sort.Strings
- _ = vsolver.Solve
-)
diff --git a/vendor/github.com/sdboyer/vsolver/_testdata/src/varied/simple/simple.go b/vendor/github.com/sdboyer/vsolver/_testdata/src/varied/simple/simple.go
deleted file mode 100644
index ed4a9c0..0000000
--- a/vendor/github.com/sdboyer/vsolver/_testdata/src/varied/simple/simple.go
+++ /dev/null
@@ -1,12 +0,0 @@
-package simple
-
-import (
- "go/parser"
-
- "github.com/sdboyer/vsolver"
-)
-
-var (
- _ = parser.ParseFile
- S = vsolver.Prepare
-)
diff --git a/vendor/github.com/sdboyer/vsolver/analysis.go b/vendor/github.com/sdboyer/vsolver/analysis.go
deleted file mode 100644
index b91d2a5..0000000
--- a/vendor/github.com/sdboyer/vsolver/analysis.go
+++ /dev/null
@@ -1,760 +0,0 @@
-package vsolver
-
-import (
- "bytes"
- "fmt"
- "go/build"
- "io"
- "io/ioutil"
- "os"
- "path/filepath"
- "sort"
- "strings"
- "text/scanner"
-)
-
-var osList []string
-var archList []string
-var stdlib = make(map[string]struct{})
-
-const stdlibPkgs string = "archive archive/tar archive/zip bufio builtin bytes compress compress/bzip2 compress/flate compress/gzip compress/lzw compress/zlib container container/heap container/list container/ring context crypto crypto/aes crypto/cipher crypto/des crypto/dsa crypto/ecdsa crypto/elliptic crypto/hmac crypto/md5 crypto/rand crypto/rc4 crypto/rsa crypto/sha1 crypto/sha256 crypto/sha512 crypto/subtle crypto/tls crypto/x509 crypto/x509/pkix database database/sql database/sql/driver debug debug/dwarf debug/elf debug/gosym debug/macho debug/pe debug/plan9obj encoding encoding/ascii85 encoding/asn1 encoding/base32 encoding/base64 encoding/binary encoding/csv encoding/gob encoding/hex encoding/json encoding/pem encoding/xml errors expvar flag fmt go go/ast go/build go/constant go/doc go/format go/importer go/parser go/printer go/scanner go/token go/types hash hash/adler32 hash/crc32 hash/crc64 hash/fnv html html/template image image/color image/color/palette image/draw image/gif image/jpeg image/png index index/suffixarray io io/ioutil log log/syslog math math/big math/cmplx math/rand mime mime/multipart mime/quotedprintable net net/http net/http/cgi net/http/cookiejar net/http/fcgi net/http/httptest net/http/httputil net/http/pprof net/mail net/rpc net/rpc/jsonrpc net/smtp net/textproto net/url os os/exec os/signal os/user path path/filepath reflect regexp regexp/syntax runtime runtime/cgo runtime/debug runtime/msan runtime/pprof runtime/race runtime/trace sort strconv strings sync sync/atomic syscall testing testing/iotest testing/quick text text/scanner text/tabwriter text/template text/template/parse time unicode unicode/utf16 unicode/utf8 unsafe"
-
-func init() {
- // The supported systems are listed in
- // https://github.com/golang/go/blob/master/src/go/build/syslist.go
- // The lists are not exported so we need to duplicate them here.
- osListString := "android darwin dragonfly freebsd linux nacl netbsd openbsd plan9 solaris windows"
- osList = strings.Split(osListString, " ")
-
- archListString := "386 amd64 amd64p32 arm armbe arm64 arm64be ppc64 ppc64le mips mipsle mips64 mips64le mips64p32 mips64p32le ppc s390 s390x sparc sparc64"
- archList = strings.Split(archListString, " ")
-
- for _, pkg := range strings.Split(stdlibPkgs, " ") {
- stdlib[pkg] = struct{}{}
- }
-}
-
-// listPackages lists info for all packages at or below the provided fileRoot.
-//
-// Directories without any valid Go files are excluded. Directories with
-// multiple packages are excluded.
-//
-// The importRoot parameter is prepended to the relative path when determining
-// the import path for each package. The obvious case is for something typical,
-// like:
-//
-// fileRoot = "/home/user/go/src/github.com/foo/bar"
-// importRoot = "github.com/foo/bar"
-//
-// where the fileRoot and importRoot align. However, if you provide:
-//
-// fileRoot = "/home/user/workspace/path/to/repo"
-// importRoot = "github.com/foo/bar"
-//
-// then the root package at path/to/repo will be ascribed import path
-// "github.com/foo/bar", and its subpackage "baz" will be
-// "github.com/foo/bar/baz".
-//
-// A PackageTree is returned, which contains the ImportRoot and map of import path
-// to PackageOrErr - each path under the root that exists will have either a
-// Package, or an error describing why the directory is not a valid package.
-func listPackages(fileRoot, importRoot string) (PackageTree, error) {
- // Set up a build.ctx for parsing
- ctx := build.Default
- ctx.GOROOT = ""
- ctx.GOPATH = ""
- ctx.UseAllFiles = true
-
- ptree := PackageTree{
- ImportRoot: importRoot,
- Packages: make(map[string]PackageOrErr),
- }
-
- // mkfilter returns two funcs that can be injected into a
- // build.Context, letting us filter the results into an "in" and "out" set.
- mkfilter := func(files map[string]struct{}) (in, out func(dir string) (fi []os.FileInfo, err error)) {
- in = func(dir string) (fi []os.FileInfo, err error) {
- all, err := ioutil.ReadDir(dir)
- if err != nil {
- return nil, err
- }
-
- for _, f := range all {
- if _, exists := files[f.Name()]; exists {
- fi = append(fi, f)
- }
- }
- return fi, nil
- }
-
- out = func(dir string) (fi []os.FileInfo, err error) {
- all, err := ioutil.ReadDir(dir)
- if err != nil {
- return nil, err
- }
-
- for _, f := range all {
- if _, exists := files[f.Name()]; !exists {
- fi = append(fi, f)
- }
- }
- return fi, nil
- }
-
- return
- }
-
- // helper func to create a Package from a *build.Package
- happy := func(importPath string, p *build.Package) Package {
- // Happy path - simple parsing worked
- pkg := Package{
- ImportPath: importPath,
- CommentPath: p.ImportComment,
- Name: p.Name,
- Imports: p.Imports,
- TestImports: dedupeStrings(p.TestImports, p.XTestImports),
- }
-
- return pkg
- }
-
- err := filepath.Walk(fileRoot, func(path string, fi os.FileInfo, err error) error {
- if err != nil && err != filepath.SkipDir {
- return err
- }
- if !fi.IsDir() {
- return nil
- }
-
- // Skip a few types of dirs
- if !localSrcDir(fi) {
- return filepath.SkipDir
- }
-
- // Compute the import path. Run the result through ToSlash(), so that windows
- // paths are normalized to Unix separators, as import paths are expected
- // to be.
- ip := filepath.ToSlash(filepath.Join(importRoot, strings.TrimPrefix(path, fileRoot)))
-
- // Find all the imports, across all os/arch combos
- p, err := ctx.ImportDir(path, analysisImportMode())
- var pkg Package
- if err == nil {
- pkg = happy(ip, p)
- } else {
- switch terr := err.(type) {
- case *build.NoGoError:
- ptree.Packages[ip] = PackageOrErr{
- Err: err,
- }
- return nil
- case *build.MultiplePackageError:
- // Set this up preemptively, so we can easily just return out if
- // something goes wrong. Otherwise, it'll get transparently
- // overwritten later.
- ptree.Packages[ip] = PackageOrErr{
- Err: err,
- }
-
- // For now, we're punting entirely on dealing with os/arch
- // combinations. That will be a more significant refactor.
- //
- // However, there is one case we want to allow here - a single
- // file, with "+build ignore", that's a main package. (Ignore is
- // just a convention, but for now it's good enough to just check
- // that.) This is a fairly common way to make a more
- // sophisticated build system than a Makefile allows, so we want
- // to support that case. So, transparently lump the deps
- // together.
- mains := make(map[string]struct{})
- for k, pkgname := range terr.Packages {
- if pkgname == "main" {
- tags, err2 := readFileBuildTags(filepath.Join(path, terr.Files[k]))
- if err2 != nil {
- return nil
- }
-
- var hasignore bool
- for _, t := range tags {
- if t == "ignore" {
- hasignore = true
- break
- }
- }
- if !hasignore {
- // No ignore tag found - bail out
- return nil
- }
- mains[terr.Files[k]] = struct{}{}
- }
- }
- // Make filtering funcs that will let us look only at the main
- // files, and exclude the main files; inf and outf, respectively
- inf, outf := mkfilter(mains)
-
- // outf first; if there's another err there, we bail out with a
- // return
- ctx.ReadDir = outf
- po, err2 := ctx.ImportDir(path, analysisImportMode())
- if err2 != nil {
- return nil
- }
- ctx.ReadDir = inf
- pi, err2 := ctx.ImportDir(path, analysisImportMode())
- if err2 != nil {
- return nil
- }
- ctx.ReadDir = nil
-
- // Use the other files as baseline, they're the main stuff
- pkg = happy(ip, po)
- mpkg := happy(ip, pi)
- pkg.Imports = dedupeStrings(pkg.Imports, mpkg.Imports)
- pkg.TestImports = dedupeStrings(pkg.TestImports, mpkg.TestImports)
- default:
- return err
- }
- }
-
- ptree.Packages[ip] = PackageOrErr{
- P: pkg,
- }
-
- return nil
- })
-
- if err != nil {
- return PackageTree{}, err
- }
-
- return ptree, nil
-}
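The import-path derivation described in listPackages's doc comment - join importRoot with the directory's path relative to fileRoot, then normalize to slash-separated form - is easy to check in isolation. A sketch of just that step (importPathFor is an illustrative helper, not a gps function):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// importPathFor maps an on-disk directory under fileRoot to its import
// path under importRoot, normalized to Unix separators as import paths
// are expected to be.
func importPathFor(fileRoot, importRoot, dir string) string {
	return filepath.ToSlash(filepath.Join(importRoot, strings.TrimPrefix(dir, fileRoot)))
}

func main() {
	// A repo checked out somewhere other than GOPATH still gets the
	// declared import root prepended.
	fmt.Println(importPathFor(
		"/home/user/workspace/path/to/repo",
		"github.com/foo/bar",
		"/home/user/workspace/path/to/repo/baz",
	))
}
```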
-
-type wm struct {
- ex map[string]struct{}
- in map[string]struct{}
-}
-
-// wmToReach takes an externalReach()-style workmap and transitively walks all
-// internal imports until they reach an external path or terminate, then
-// translates the results into a slice of external imports for each internal
-// pkg.
-//
-// The basedir string, with a trailing slash ensured, will be stripped from the
-// keys of the returned map.
-func wmToReach(workmap map[string]wm, basedir string) (rm map[string][]string, err error) {
- // Just brute-force through the workmap, repeating until we make no
- // progress, either because no packages have any unresolved internal
- // packages left (in which case we're done), or because some packages can't
- // find something in the 'in' list (which shouldn't be possible)
- //
- // This implementation is hilariously inefficient in pure computational
- // complexity terms - worst case is some flavor of polynomial, versus O(n)
- // for the filesystem scan done in externalReach(). However, the coefficient
- // for filesystem access is so much larger than for memory twiddling that it
- // would probably take an absurdly large and snaky project to ever have that
- // worst-case polynomial growth supercede (or even become comparable to) the
- // linear side.
- //
- // But, if that day comes, we can improve this algorithm.
- rm = make(map[string][]string)
- var complete bool
- for !complete {
- var progress bool
- complete = true
-
- for pkg, w := range workmap {
- if len(w.in) == 0 {
- continue
- }
- complete = false
- // Each pass should always empty the original in list, but there
- // could be more in lists inherited from the other package
- // (transitive internal deps)
- for in := range w.in {
- if w2, exists := workmap[in]; !exists {
- return nil, fmt.Errorf("Should be impossible: %s depends on %s, but %s not in workmap", pkg, in, in)
- } else {
- progress = true
- delete(w.in, in)
-
- for i := range w2.ex {
- w.ex[i] = struct{}{}
- }
- for i := range w2.in {
- w.in[i] = struct{}{}
- }
- }
- }
- }
-
- if !complete && !progress {
- // Can't conceive of a way that we'd hit this, but this guards
- // against infinite loop
- panic("unreachable")
- }
- }
-
- // finally, transform to slice for return
- rm = make(map[string][]string)
- // ensure we have a version of the basedir w/trailing slash, for stripping
- rt := strings.TrimSuffix(basedir, string(os.PathSeparator)) + string(os.PathSeparator)
-
- for pkg, w := range workmap {
- if len(w.ex) == 0 {
- rm[strings.TrimPrefix(pkg, rt)] = nil
- continue
- }
-
- edeps := make([]string, len(w.ex))
- k := 0
- for opkg := range w.ex {
- edeps[k] = opkg
- k++
- }
-
- sort.Strings(edeps)
- rm[strings.TrimPrefix(pkg, rt)] = edeps
- }
-
- return rm, nil
-}
-
-func localSrcDir(fi os.FileInfo) bool {
- // Ignore _foo and .foo, and testdata
- name := fi.Name()
- if strings.HasPrefix(name, ".") || strings.HasPrefix(name, "_") || name == "testdata" {
- return false
- }
-
- // Ignore dirs that are expressly intended for non-project source
- switch name {
- case "vendor", "Godeps":
- return false
- default:
- return true
- }
-}
-
-func readBuildTags(p string) ([]string, error) {
- _, err := os.Stat(p)
- if err != nil {
- return []string{}, err
- }
-
- d, err := os.Open(p)
- if err != nil {
- return []string{}, err
- }
-
- objects, err := d.Readdir(-1)
- if err != nil {
- return []string{}, err
- }
-
- var tags []string
- for _, obj := range objects {
-
- // only process Go files
- if strings.HasSuffix(obj.Name(), ".go") {
- fp := filepath.Join(p, obj.Name())
-
- co, err := readGoContents(fp)
- if err != nil {
- return []string{}, err
- }
-
- // Only look at places where we had a code comment.
- if len(co) > 0 {
- t := findTags(co)
- for _, tg := range t {
- found := false
- for _, tt := range tags {
- if tt == tg {
- found = true
- }
- }
- if !found {
- tags = append(tags, tg)
- }
- }
- }
- }
- }
-
- return tags, nil
-}
-
-func readFileBuildTags(fp string) ([]string, error) {
- co, err := readGoContents(fp)
- if err != nil {
- return []string{}, err
- }
-
- var tags []string
- // Only look at places where we had a code comment.
- if len(co) > 0 {
- t := findTags(co)
- for _, tg := range t {
- found := false
- for _, tt := range tags {
- if tt == tg {
- found = true
- }
- }
- if !found {
- tags = append(tags, tg)
- }
- }
- }
-
- return tags, nil
-}
-
-// Read contents of a Go file up to the package declaration. This can be used
-// to find the the build tags.
-func readGoContents(fp string) ([]byte, error) {
- f, err := os.Open(fp)
- defer f.Close()
- if err != nil {
- return []byte{}, err
- }
-
- var s scanner.Scanner
- s.Init(f)
- var tok rune
- var pos scanner.Position
- for tok != scanner.EOF {
- tok = s.Scan()
-
- // Getting the token text will skip comments by default.
- tt := s.TokenText()
- // build tags will not be after the package declaration.
- if tt == "package" {
- pos = s.Position
- break
- }
- }
-
- var buf bytes.Buffer
- f.Seek(0, 0)
- _, err = io.CopyN(&buf, f, int64(pos.Offset))
- if err != nil {
- return []byte{}, err
- }
-
- return buf.Bytes(), nil
-}
-
-// From a byte slice of a Go file find the tags.
-func findTags(co []byte) []string {
- p := co
- var tgs []string
- for len(p) > 0 {
- line := p
- if i := bytes.IndexByte(line, '\n'); i >= 0 {
- line, p = line[:i], p[i+1:]
- } else {
- p = p[len(p):]
- }
- line = bytes.TrimSpace(line)
- // Only look at comment lines that are well formed in the Go style
- if bytes.HasPrefix(line, []byte("//")) {
- line = bytes.TrimSpace(line[len([]byte("//")):])
- if len(line) > 0 && line[0] == '+' {
- f := strings.Fields(string(line))
-
- // We've found a +build tag line.
- if f[0] == "+build" {
- for _, tg := range f[1:] {
- tgs = append(tgs, tg)
- }
- }
- }
- }
- }
-
- return tgs
-}
-
-// Get an OS value that's not the one passed in.
-func getOsValue(n string) string {
- for _, o := range osList {
- if o != n {
- return o
- }
- }
-
- return n
-}
-
-func isSupportedOs(n string) bool {
- for _, o := range osList {
- if o == n {
- return true
- }
- }
-
- return false
-}
-
-// Get an Arch value that's not the one passed in.
-func getArchValue(n string) string {
- for _, o := range archList {
- if o != n {
- return o
- }
- }
-
- return n
-}
-
-func isSupportedArch(n string) bool {
- for _, o := range archList {
- if o == n {
- return true
- }
- }
-
- return false
-}
-
-func ensureTrailingSlash(s string) string {
- return strings.TrimSuffix(s, string(os.PathSeparator)) + string(os.PathSeparator)
-}
-
-// helper func to merge, dedupe, and sort strings
-func dedupeStrings(s1, s2 []string) (r []string) {
- dedupe := make(map[string]bool)
-
- if len(s1) > 0 && len(s2) > 0 {
- for _, i := range s1 {
- dedupe[i] = true
- }
- for _, i := range s2 {
- dedupe[i] = true
- }
-
- for i := range dedupe {
- r = append(r, i)
- }
- // And then re-sort them
- sort.Strings(r)
- } else if len(s1) > 0 {
- r = s1
- } else if len(s2) > 0 {
- r = s2
- }
-
- return
-}
-
-// A PackageTree represents the results of recursively parsing a tree of
-// packages, starting at the ImportRoot. The results of parsing the files in the
-// directory identified by each import path - a Package or an error - are stored
-// in the Packages map, keyed by that import path.
-type PackageTree struct {
- ImportRoot string
- Packages map[string]PackageOrErr
-}
-
-// PackageOrErr stores the results of attempting to parse a single directory for
-// Go source code.
-type PackageOrErr struct {
- P Package
- Err error
-}
-
-// ExternalReach looks through a PackageTree and computes the list of external
-// packages (not logical children of PackageTree.ImportRoot) that are
-// transitively imported by the internal packages in the tree.
-//
-// main indicates whether (true) or not (false) to include main packages in the
-// analysis. main packages should generally be excluded when analyzing the
-// non-root dependency, as they inherently can't be imported.
-//
-// tests indicates whether (true) or not (false) to include imports from test
-// files in packages when computing the reach map.
-//
-// ignore is a map of import paths that, if encountered, should be excluded from
-// analysis. This exclusion applies to both internal and external packages. If
-// an external import path is ignored, it is simply omitted from the results.
-//
-// If an internal path is ignored, then it is excluded from all transitive
-// dependency chains and does not appear as a key in the final map. That is, if
-// you ignore A/foo, then the external package list for all internal packages
-// that import A/foo will not include external packages that were only reachable
-// through A/foo.
-//
-// Visually, this means that, given a PackageTree with root A and packages at A,
-// A/foo, and A/bar, and the following import chain:
-//
-// A -> A/foo -> A/bar -> B/baz
-//
-// If you ignore A/foo, then the returned map would be:
-//
-// map[string][]string{
-// "A": []string{},
-// "A/bar": []string{"B/baz"},
-// }
-//
-// It is safe to pass a nil map if there are no packages to ignore.
-func (t PackageTree) ExternalReach(main, tests bool, ignore map[string]bool) (map[string][]string, error) {
- var someerrs bool
-
- if ignore == nil {
- ignore = make(map[string]bool)
- }
-
- // world's simplest adjacency list
- workmap := make(map[string]wm)
-
- var imps []string
- for ip, perr := range t.Packages {
- if perr.Err != nil {
- someerrs = true
- continue
- }
- p := perr.P
- // Skip main packages, unless param says otherwise
- if p.Name == "main" && !main {
- continue
- }
- // Skip ignored packages
- if ignore[ip] {
- continue
- }
-
- imps = imps[:0]
- imps = p.Imports
- if tests {
- imps = dedupeStrings(imps, p.TestImports)
- }
-
- w := wm{
- ex: make(map[string]struct{}),
- in: make(map[string]struct{}),
- }
-
- for _, imp := range imps {
- if ignore[imp] {
- continue
- }
-
- if !checkPrefixSlash(filepath.Clean(imp), t.ImportRoot) {
- w.ex[imp] = struct{}{}
- } else {
- if w2, seen := workmap[imp]; seen {
- for i := range w2.ex {
- w.ex[i] = struct{}{}
- }
- for i := range w2.in {
- w.in[i] = struct{}{}
- }
- } else {
- w.in[imp] = struct{}{}
- }
- }
- }
-
- workmap[ip] = w
- }
-
- if len(workmap) == 0 {
- if someerrs {
- // TODO proper errs
- return nil, fmt.Errorf("no packages without errors in %s", t.ImportRoot)
- }
- return nil, nil
- }
-
- //return wmToReach(workmap, t.ImportRoot)
- return wmToReach(workmap, "") // TODO this passes tests, but doesn't seem right
-}
-
-// ListExternalImports computes a sorted, deduplicated list of all the external
-// packages that are imported by all packages in the PackageTree.
-//
-// "External" is defined as anything not prefixed, after path cleaning, by the
-// PackageTree.ImportRoot. This includes stdlib.
-//
-// If an internal path is ignored, all of the external packages that it uniquely
-// imports are omitted. Note, however, that no internal transitivity checks are
-// made here - every non-ignored package in the tree is considered
-// independently. That means, given a PackageTree with root A and packages at A,
-// A/foo, and A/bar, and the following import chain:
-//
-// A -> A/foo -> A/bar -> B/baz
-//
-// If you ignore A or A/foo, A/bar will still be visited, and B/baz will be
-// returned, because this method visits ALL packages in the tree, not only those reachable
-// from the root (or any other) package. If your use case requires interrogating
-// external imports with respect to only specific package entry points, you need
-// ExternalReach() instead.
-//
-// It is safe to pass a nil map if there are no packages to ignore.
-func (t PackageTree) ListExternalImports(main, tests bool, ignore map[string]bool) ([]string, error) {
- var someerrs bool
- exm := make(map[string]struct{})
-
- if ignore == nil {
- ignore = make(map[string]bool)
- }
-
- var imps []string
- for ip, perr := range t.Packages {
- if perr.Err != nil {
- someerrs = true
- continue
- }
-
- p := perr.P
- // Skip main packages, unless param says otherwise
- if p.Name == "main" && !main {
- continue
- }
- // Skip ignored packages
- if ignore[ip] {
- continue
- }
-
- imps = imps[:0]
- imps = p.Imports
- if tests {
- imps = dedupeStrings(imps, p.TestImports)
- }
-
- for _, imp := range imps {
- if !checkPrefixSlash(filepath.Clean(imp), t.ImportRoot) && !ignore[imp] {
- exm[imp] = struct{}{}
- }
- }
- }
-
- if len(exm) == 0 {
- if someerrs {
- // TODO proper errs
- return nil, fmt.Errorf("no packages without errors in %s", t.ImportRoot)
- }
- return nil, nil
- }
-
- ex := make([]string, len(exm))
- k := 0
- for p := range exm {
- ex[k] = p
- k++
- }
-
- sort.Strings(ex)
- return ex, nil
-}
-
-// checkPrefixSlash checks to see if the prefix is a prefix of the string as-is,
-// and that it is either equal OR the prefix + / is still a prefix.
-func checkPrefixSlash(s, prefix string) bool {
- if !strings.HasPrefix(s, prefix) {
- return false
- }
- return s == prefix || strings.HasPrefix(s, ensureTrailingSlash(prefix))
-}
diff --git a/vendor/github.com/sdboyer/vsolver/hash_test.go b/vendor/github.com/sdboyer/vsolver/hash_test.go
deleted file mode 100644
index 4bbb7d2..0000000
--- a/vendor/github.com/sdboyer/vsolver/hash_test.go
+++ /dev/null
@@ -1,45 +0,0 @@
-package vsolver
-
-import (
- "bytes"
- "crypto/sha256"
- "testing"
-)
-
-func TestHashInputs(t *testing.T) {
- fix := basicFixtures[2]
-
- args := SolveArgs{
- Root: string(fix.ds[0].Name()),
- Name: fix.ds[0].Name(),
- Manifest: fix.ds[0],
- Ignore: []string{"foo", "bar"},
- }
-
- // prep a fixture-overridden solver
- si, err := Prepare(args, SolveOpts{}, newdepspecSM(fix.ds, nil))
- s := si.(*solver)
- if err != nil {
- t.Fatalf("Could not prepare solver due to err: %s", err)
- }
-
- fixb := &depspecBridge{
- s.b.(*bridge),
- }
- s.b = fixb
-
- dig, err := s.HashInputs()
- if err != nil {
- t.Fatalf("HashInputs returned unexpected err: %s", err)
- }
-
- h := sha256.New()
- for _, v := range []string{"a", "a", "1.0.0", "b", "b", "1.0.0", stdlibPkgs, "root", "", "root", "a", "b", "bar", "foo"} {
- h.Write([]byte(v))
- }
- correct := h.Sum(nil)
-
- if !bytes.Equal(dig, correct) {
- t.Errorf("Hashes are not equal")
- }
-}
diff --git a/vendor/github.com/sdboyer/vsolver/manifest.go b/vendor/github.com/sdboyer/vsolver/manifest.go
deleted file mode 100644
index 51dac26..0000000
--- a/vendor/github.com/sdboyer/vsolver/manifest.go
+++ /dev/null
@@ -1,86 +0,0 @@
-package vsolver
-
-// Manifest represents the data from a manifest file (or however the
-// implementing tool chooses to store it) at a particular version that is
-// relevant to the satisfiability solving process. That means constraints on
-// dependencies, both for normal dependencies and for tests.
-//
-// Finding a solution that satisfies the constraints expressed by all of these
-// dependencies (and those from all other projects, transitively), is what the
-// solver does.
-//
-// Note that vsolver does perform static analysis on all projects' codebases;
-// if dependencies it finds through that analysis are missing from what the
-// Manifest lists, it is considered an error that will eliminate that version
-// from consideration in the solving algorithm.
-type Manifest interface {
- Name() ProjectName
- DependencyConstraints() []ProjectDep
- TestDependencyConstraints() []ProjectDep
-}
-
-// SimpleManifest is a helper for tools to enumerate manifest data. It's
-// generally intended for ephemeral manifests, such as those Analyzers create on
-// the fly for projects with no manifest metadata, or whose metadata is
-// expressed through a foreign tool's idioms.
-type SimpleManifest struct {
- N ProjectName
- Deps []ProjectDep
- TestDeps []ProjectDep
-}
-
-var _ Manifest = SimpleManifest{}
-
-// Name returns the name of the project described by the manifest.
-func (m SimpleManifest) Name() ProjectName {
- return m.N
-}
-
-// DependencyConstraints returns the project's dependencies.
-func (m SimpleManifest) DependencyConstraints() []ProjectDep {
- return m.Deps
-}
-
-// TestDependencyConstraints returns the project's test dependencies.
-func (m SimpleManifest) TestDependencyConstraints() []ProjectDep {
- return m.TestDeps
-}
-
-// prepManifest ensures a manifest is prepared and safe for use by the solver.
-// This entails two things:
-//
-// * Ensuring that all ProjectIdentifiers are normalized (otherwise matching
-// can get screwy and the queues go out of alignment)
-// * Defensively ensuring that no outside routine can modify the manifest while
-// the solver is in-flight.
-//
-// This is achieved by copying the manifest's data into a new SimpleManifest.
-func prepManifest(m Manifest, n ProjectName) Manifest {
- if m == nil {
- // Only use the provided ProjectName if making an empty manifest;
- // otherwise, we trust the input manifest.
- return SimpleManifest{
- N: n,
- }
- }
-
- deps := m.DependencyConstraints()
- ddeps := m.TestDependencyConstraints()
-
- rm := SimpleManifest{
- N: m.Name(),
- Deps: make([]ProjectDep, len(deps)),
- TestDeps: make([]ProjectDep, len(ddeps)),
- }
-
- for k, d := range deps {
- d.Ident = d.Ident.normalize()
- rm.Deps[k] = d
- }
- for k, d := range ddeps {
- d.Ident = d.Ident.normalize()
- rm.TestDeps[k] = d
- }
-
- return rm
-}
diff --git a/vendor/github.com/sdboyer/vsolver/solve_basic_test.go b/vendor/github.com/sdboyer/vsolver/solve_basic_test.go
deleted file mode 100644
index 910cd05..0000000
--- a/vendor/github.com/sdboyer/vsolver/solve_basic_test.go
+++ /dev/null
@@ -1,1279 +0,0 @@
-package vsolver
-
-import (
- "fmt"
- "regexp"
- "strings"
-
- "github.com/Masterminds/semver"
-)
-
-var regfrom = regexp.MustCompile(`^(\w*) from (\w*) ([0-9\.]*)`)
-
-// nsvSplit splits an "info" string on " " into the pair of name and
-// version/constraint, and returns each individually.
-//
-// This is for narrow use - panics if there are fewer than two resulting items in
-// the slice.
-func nsvSplit(info string) (id ProjectIdentifier, version string) {
- if strings.Contains(info, " from ") {
- parts := regfrom.FindStringSubmatch(info)
- info = parts[1] + " " + parts[3]
- id.NetworkName = parts[2]
- }
-
- s := strings.SplitN(info, " ", 2)
- if len(s) < 2 {
- panic(fmt.Sprintf("Malformed name/version info string '%s'", info))
- }
-
- id.LocalName, version = ProjectName(s[0]), s[1]
- if id.NetworkName == "" {
- id.NetworkName = string(id.LocalName)
- }
- return
-}
-
-// nsvrSplit splits an "info" string on " " into the triplet of name,
-// version/constraint, and revision, and returns each individually.
-//
-// It will work fine if only name and version/constraint are provided.
-//
-// This is for narrow use - panics if there are fewer than two resulting items in
-// the slice.
-func nsvrSplit(info string) (id ProjectIdentifier, version string, revision Revision) {
- if strings.Contains(info, " from ") {
- parts := regfrom.FindStringSubmatch(info)
- info = parts[1] + " " + parts[3]
- id.NetworkName = parts[2]
- }
-
- s := strings.SplitN(info, " ", 3)
- if len(s) < 2 {
- panic(fmt.Sprintf("Malformed name/version info string '%s'", info))
- }
-
- id.LocalName, version = ProjectName(s[0]), s[1]
- if id.NetworkName == "" {
- id.NetworkName = string(id.LocalName)
- }
-
- if len(s) == 3 {
- revision = Revision(s[2])
- }
- return
-}
-
-// mksvpa - "make semver project atom"
-//
-// Splits the input string on spaces, and uses the resulting elements as the
-// project name, version, and (optionally) revision, respectively.
-func mksvpa(info string) atom {
- id, ver, rev := nsvrSplit(info)
-
- _, err := semver.NewVersion(ver)
- if err != nil {
- // don't want to allow bad test data at this level, so just panic
- panic(fmt.Sprintf("Error when converting '%s' into semver: %s", ver, err))
- }
-
- var v Version
- v = NewVersion(ver)
- if rev != "" {
- v = v.(UnpairedVersion).Is(rev)
- }
-
- return atom{
- id: id,
- v: v,
- }
-}
-
-// mkc - "make constraint"
-func mkc(body string) Constraint {
- c, err := NewSemverConstraint(body)
- if err != nil {
- // don't want bad test data at this level, so just panic
- panic(fmt.Sprintf("Error when converting '%s' into semver constraint: %s", body, err))
- }
-
- return c
-}
-
-// mksvd - "make semver dependency"
-//
-// Splits the input string on a space, and uses the first two elements as the
-// project name and constraint body, respectively.
-func mksvd(info string) ProjectDep {
- id, v := nsvSplit(info)
-
- return ProjectDep{
- Ident: id,
- Constraint: mkc(v),
- }
-}
-
-type depspec struct {
- n ProjectName
- v Version
- deps []ProjectDep
- devdeps []ProjectDep
- pkgs []tpkg
-}
-
-// dsv - "depspec semver" (make a semver depspec)
-//
-// Wraps up all the other semver-making-helper funcs to create a depspec with
-// both semver versions and constraints.
-//
-// As it assembles from the other shortcut methods, it'll panic if anything's
-// malformed.
-//
-// First string is broken out into the name/semver of the main package.
-func dsv(pi string, deps ...string) depspec {
- pa := mksvpa(pi)
- if string(pa.id.LocalName) != pa.id.NetworkName {
- panic("alternate source on self makes no sense")
- }
-
- ds := depspec{
- n: pa.id.LocalName,
- v: pa.v,
- }
-
- for _, dep := range deps {
- var sl *[]ProjectDep
- if strings.HasPrefix(dep, "(dev) ") {
- dep = strings.TrimPrefix(dep, "(dev) ")
- sl = &ds.devdeps
- } else {
- sl = &ds.deps
- }
-
- *sl = append(*sl, mksvd(dep))
- }
-
- return ds
-}
-
-// mklock makes a fixLock, suitable to act as a lock file
-func mklock(pairs ...string) fixLock {
- l := make(fixLock, 0)
- for _, s := range pairs {
- pa := mksvpa(s)
- l = append(l, NewLockedProject(pa.id.LocalName, pa.v, pa.id.netName(), "", nil))
- }
-
- return l
-}
-
-// mkrevlock makes a fixLock, suitable to act as a lock file, with only a name
-// and a rev
-func mkrevlock(pairs ...string) fixLock {
- l := make(fixLock, 0)
- for _, s := range pairs {
- pa := mksvpa(s)
- l = append(l, NewLockedProject(pa.id.LocalName, pa.v.(PairedVersion).Underlying(), pa.id.netName(), "", nil))
- }
-
- return l
-}
-
-// mkresults makes a result set
-func mkresults(pairs ...string) map[string]Version {
- m := make(map[string]Version)
- for _, pair := range pairs {
- name, ver, rev := nsvrSplit(pair)
-
- var v Version
- v = NewVersion(ver)
- if rev != "" {
- v = v.(UnpairedVersion).Is(rev)
- }
-
- m[string(name.LocalName)] = v
- }
-
- return m
-}
-
-// computeBasicReachMap takes a slice of depspecs and computes a reach map which is
-// identical to the explicit depgraph.
-//
-// Using a reachMap here is overkill for what the basic fixtures actually need,
-// but we use it anyway for congruence with the more general cases.
-func computeBasicReachMap(ds []depspec) reachMap {
- rm := make(reachMap)
-
- for k, d := range ds {
- n := string(d.n)
- lm := map[string][]string{
- n: nil,
- }
- v := d.v
- if k == 0 {
- // Put the root in with a nil version, to accommodate the solver
- v = nil
- }
- rm[pident{n: d.n, v: v}] = lm
-
- for _, dep := range d.deps {
- lm[n] = append(lm[n], string(dep.Ident.LocalName))
- }
-
- // first is root
- if k == 0 {
- for _, dep := range d.devdeps {
- lm[n] = append(lm[n], string(dep.Ident.LocalName))
- }
- }
- }
-
- return rm
-}
-
-type pident struct {
- n ProjectName
- v Version
-}
-
-type specfix interface {
- name() string
- specs() []depspec
- maxTries() int
- expectErrs() []string
- result() map[string]Version
-}
-
-type basicFixture struct {
- // name of this fixture datum
- n string
- // depspecs. always treat first as root
- ds []depspec
- // results; map of name/version pairs
- r map[string]Version
- // max attempts the solver should need to find solution. 0 means no limit
- maxAttempts int
- // Use downgrade instead of default upgrade sorter
- downgrade bool
- // lock file simulator, if one's to be used at all
- l fixLock
- // projects expected to have errors, if any
- errp []string
- // request up/downgrade to all projects
- changeall bool
-}
-
-func (f basicFixture) name() string {
- return f.n
-}
-
-func (f basicFixture) specs() []depspec {
- return f.ds
-}
-
-func (f basicFixture) maxTries() int {
- return f.maxAttempts
-}
-
-func (f basicFixture) expectErrs() []string {
- return f.errp
-}
-
-func (f basicFixture) result() map[string]Version {
- return f.r
-}
-
-var basicFixtures = []basicFixture{
- // basic fixtures
- {
- n: "no dependencies",
- ds: []depspec{
- dsv("root 0.0.0"),
- },
- r: mkresults(),
- },
- {
- n: "simple dependency tree",
- ds: []depspec{
- dsv("root 0.0.0", "a 1.0.0", "b 1.0.0"),
- dsv("a 1.0.0", "aa 1.0.0", "ab 1.0.0"),
- dsv("aa 1.0.0"),
- dsv("ab 1.0.0"),
- dsv("b 1.0.0", "ba 1.0.0", "bb 1.0.0"),
- dsv("ba 1.0.0"),
- dsv("bb 1.0.0"),
- },
- r: mkresults(
- "a 1.0.0",
- "aa 1.0.0",
- "ab 1.0.0",
- "b 1.0.0",
- "ba 1.0.0",
- "bb 1.0.0",
- ),
- },
- {
- n: "shared dependency with overlapping constraints",
- ds: []depspec{
- dsv("root 0.0.0", "a 1.0.0", "b 1.0.0"),
- dsv("a 1.0.0", "shared >=2.0.0, <4.0.0"),
- dsv("b 1.0.0", "shared >=3.0.0, <5.0.0"),
- dsv("shared 2.0.0"),
- dsv("shared 3.0.0"),
- dsv("shared 3.6.9"),
- dsv("shared 4.0.0"),
- dsv("shared 5.0.0"),
- },
- r: mkresults(
- "a 1.0.0",
- "b 1.0.0",
- "shared 3.6.9",
- ),
- },
- {
- n: "downgrade on overlapping constraints",
- ds: []depspec{
- dsv("root 0.0.0", "a 1.0.0", "b 1.0.0"),
- dsv("a 1.0.0", "shared >=2.0.0, <=4.0.0"),
- dsv("b 1.0.0", "shared >=3.0.0, <5.0.0"),
- dsv("shared 2.0.0"),
- dsv("shared 3.0.0"),
- dsv("shared 3.6.9"),
- dsv("shared 4.0.0"),
- dsv("shared 5.0.0"),
- },
- r: mkresults(
- "a 1.0.0",
- "b 1.0.0",
- "shared 3.0.0",
- ),
- downgrade: true,
- },
- {
- n: "shared dependency where dependent version in turn affects other dependencies",
- ds: []depspec{
- dsv("root 0.0.0", "foo <=1.0.2", "bar 1.0.0"),
- dsv("foo 1.0.0"),
- dsv("foo 1.0.1", "bang 1.0.0"),
- dsv("foo 1.0.2", "whoop 1.0.0"),
- dsv("foo 1.0.3", "zoop 1.0.0"),
- dsv("bar 1.0.0", "foo <=1.0.1"),
- dsv("bang 1.0.0"),
- dsv("whoop 1.0.0"),
- dsv("zoop 1.0.0"),
- },
- r: mkresults(
- "foo 1.0.1",
- "bar 1.0.0",
- "bang 1.0.0",
- ),
- },
- {
- n: "removed dependency",
- ds: []depspec{
- dsv("root 1.0.0", "foo 1.0.0", "bar *"),
- dsv("foo 1.0.0"),
- dsv("foo 2.0.0"),
- dsv("bar 1.0.0"),
- dsv("bar 2.0.0", "baz 1.0.0"),
- dsv("baz 1.0.0", "foo 2.0.0"),
- },
- r: mkresults(
- "foo 1.0.0",
- "bar 1.0.0",
- ),
- maxAttempts: 2,
- },
- {
- n: "with mismatched net addrs",
- ds: []depspec{
- dsv("root 1.0.0", "foo 1.0.0", "bar 1.0.0"),
- dsv("foo 1.0.0", "bar from baz 1.0.0"),
- dsv("bar 1.0.0"),
- },
- // TODO ugh; do real error comparison instead of shitty abstraction
- errp: []string{"foo", "foo", "root"},
- },
- // fixtures with locks
- {
- n: "with compatible locked dependency",
- ds: []depspec{
- dsv("root 0.0.0", "foo *"),
- dsv("foo 1.0.0", "bar 1.0.0"),
- dsv("foo 1.0.1", "bar 1.0.1"),
- dsv("foo 1.0.2", "bar 1.0.2"),
- dsv("bar 1.0.0"),
- dsv("bar 1.0.1"),
- dsv("bar 1.0.2"),
- },
- l: mklock(
- "foo 1.0.1",
- ),
- r: mkresults(
- "foo 1.0.1",
- "bar 1.0.1",
- ),
- },
- {
- n: "upgrade through lock",
- ds: []depspec{
- dsv("root 0.0.0", "foo *"),
- dsv("foo 1.0.0", "bar 1.0.0"),
- dsv("foo 1.0.1", "bar 1.0.1"),
- dsv("foo 1.0.2", "bar 1.0.2"),
- dsv("bar 1.0.0"),
- dsv("bar 1.0.1"),
- dsv("bar 1.0.2"),
- },
- l: mklock(
- "foo 1.0.1",
- ),
- r: mkresults(
- "foo 1.0.2",
- "bar 1.0.2",
- ),
- changeall: true,
- },
- {
- n: "downgrade through lock",
- ds: []depspec{
- dsv("root 0.0.0", "foo *"),
- dsv("foo 1.0.0", "bar 1.0.0"),
- dsv("foo 1.0.1", "bar 1.0.1"),
- dsv("foo 1.0.2", "bar 1.0.2"),
- dsv("bar 1.0.0"),
- dsv("bar 1.0.1"),
- dsv("bar 1.0.2"),
- },
- l: mklock(
- "foo 1.0.1",
- ),
- r: mkresults(
- "foo 1.0.0",
- "bar 1.0.0",
- ),
- changeall: true,
- downgrade: true,
- },
- {
- n: "with incompatible locked dependency",
- ds: []depspec{
- dsv("root 0.0.0", "foo >1.0.1"),
- dsv("foo 1.0.0", "bar 1.0.0"),
- dsv("foo 1.0.1", "bar 1.0.1"),
- dsv("foo 1.0.2", "bar 1.0.2"),
- dsv("bar 1.0.0"),
- dsv("bar 1.0.1"),
- dsv("bar 1.0.2"),
- },
- l: mklock(
- "foo 1.0.1",
- ),
- r: mkresults(
- "foo 1.0.2",
- "bar 1.0.2",
- ),
- },
- {
- n: "with unrelated locked dependency",
- ds: []depspec{
- dsv("root 0.0.0", "foo *"),
- dsv("foo 1.0.0", "bar 1.0.0"),
- dsv("foo 1.0.1", "bar 1.0.1"),
- dsv("foo 1.0.2", "bar 1.0.2"),
- dsv("bar 1.0.0"),
- dsv("bar 1.0.1"),
- dsv("bar 1.0.2"),
- dsv("baz 1.0.0 bazrev"),
- },
- l: mklock(
- "baz 1.0.0 bazrev",
- ),
- r: mkresults(
- "foo 1.0.2",
- "bar 1.0.2",
- ),
- },
- {
- n: "unlocks dependencies if necessary to ensure that a new dependency is satisfied",
- ds: []depspec{
- dsv("root 0.0.0", "foo *", "newdep *"),
- dsv("foo 1.0.0 foorev", "bar <2.0.0"),
- dsv("bar 1.0.0 barrev", "baz <2.0.0"),
- dsv("baz 1.0.0 bazrev", "qux <2.0.0"),
- dsv("qux 1.0.0 quxrev"),
- dsv("foo 2.0.0", "bar <3.0.0"),
- dsv("bar 2.0.0", "baz <3.0.0"),
- dsv("baz 2.0.0", "qux <3.0.0"),
- dsv("qux 2.0.0"),
- dsv("newdep 2.0.0", "baz >=1.5.0"),
- },
- l: mklock(
- "foo 1.0.0 foorev",
- "bar 1.0.0 barrev",
- "baz 1.0.0 bazrev",
- "qux 1.0.0 quxrev",
- ),
- r: mkresults(
- "foo 2.0.0",
- "bar 2.0.0",
- "baz 2.0.0",
- "qux 1.0.0 quxrev",
- "newdep 2.0.0",
- ),
- maxAttempts: 4,
- },
- {
- n: "locked atoms are matched on both local and net name",
- ds: []depspec{
- dsv("root 0.0.0", "foo *"),
- dsv("foo 1.0.0 foorev"),
- dsv("foo 2.0.0 foorev2"),
- },
- l: mklock(
- "foo from baz 1.0.0 foorev",
- ),
- r: mkresults(
- "foo 2.0.0 foorev2",
- ),
- },
- {
- n: "pairs bare revs in lock with versions",
- ds: []depspec{
- dsv("root 0.0.0", "foo ~1.0.1"),
- dsv("foo 1.0.0", "bar 1.0.0"),
- dsv("foo 1.0.1 foorev", "bar 1.0.1"),
- dsv("foo 1.0.2", "bar 1.0.2"),
- dsv("bar 1.0.0"),
- dsv("bar 1.0.1"),
- dsv("bar 1.0.2"),
- },
- l: mkrevlock(
- "foo 1.0.1 foorev", // mkrevlock drops the 1.0.1
- ),
- r: mkresults(
- "foo 1.0.1 foorev",
- "bar 1.0.1",
- ),
- },
- {
- n: "pairs bare revs in lock with all versions",
- ds: []depspec{
- dsv("root 0.0.0", "foo ~1.0.1"),
- dsv("foo 1.0.0", "bar 1.0.0"),
- dsv("foo 1.0.1 foorev", "bar 1.0.1"),
- dsv("foo 1.0.2 foorev", "bar 1.0.2"),
- dsv("bar 1.0.0"),
- dsv("bar 1.0.1"),
- dsv("bar 1.0.2"),
- },
- l: mkrevlock(
- "foo 1.0.1 foorev", // mkrevlock drops the 1.0.1
- ),
- r: mkresults(
- "foo 1.0.2 foorev",
- "bar 1.0.1",
- ),
- },
- {
- n: "does not pair bare revs in manifest with unpaired lock version",
- ds: []depspec{
- dsv("root 0.0.0", "foo ~1.0.1"),
- dsv("foo 1.0.0", "bar 1.0.0"),
- dsv("foo 1.0.1 foorev", "bar 1.0.1"),
- dsv("foo 1.0.2", "bar 1.0.2"),
- dsv("bar 1.0.0"),
- dsv("bar 1.0.1"),
- dsv("bar 1.0.2"),
- },
- l: mkrevlock(
- "foo 1.0.1 foorev", // mkrevlock drops the 1.0.1
- ),
- r: mkresults(
- "foo 1.0.1 foorev",
- "bar 1.0.1",
- ),
- },
- {
- n: "includes root package's dev dependencies",
- ds: []depspec{
- dsv("root 1.0.0", "(dev) foo 1.0.0", "(dev) bar 1.0.0"),
- dsv("foo 1.0.0"),
- dsv("bar 1.0.0"),
- },
- r: mkresults(
- "foo 1.0.0",
- "bar 1.0.0",
- ),
- },
- {
- n: "includes dev dependency's transitive dependencies",
- ds: []depspec{
- dsv("root 1.0.0", "(dev) foo 1.0.0"),
- dsv("foo 1.0.0", "bar 1.0.0"),
- dsv("bar 1.0.0"),
- },
- r: mkresults(
- "foo 1.0.0",
- "bar 1.0.0",
- ),
- },
- {
- n: "ignores transitive dependency's dev dependencies",
- ds: []depspec{
- dsv("root 1.0.0", "(dev) foo 1.0.0"),
- dsv("foo 1.0.0", "(dev) bar 1.0.0"),
- dsv("bar 1.0.0"),
- },
- r: mkresults(
- "foo 1.0.0",
- ),
- },
- {
- n: "no version that matches requirement",
- ds: []depspec{
- dsv("root 0.0.0", "foo >=1.0.0, <2.0.0"),
- dsv("foo 2.0.0"),
- dsv("foo 2.1.3"),
- },
- errp: []string{"foo", "root"},
- },
- {
- n: "no version that matches combined constraint",
- ds: []depspec{
- dsv("root 0.0.0", "foo 1.0.0", "bar 1.0.0"),
- dsv("foo 1.0.0", "shared >=2.0.0, <3.0.0"),
- dsv("bar 1.0.0", "shared >=2.9.0, <4.0.0"),
- dsv("shared 2.5.0"),
- dsv("shared 3.5.0"),
- },
- errp: []string{"shared", "foo", "bar"},
- },
- {
- n: "disjoint constraints",
- ds: []depspec{
- dsv("root 0.0.0", "foo 1.0.0", "bar 1.0.0"),
- dsv("foo 1.0.0", "shared <=2.0.0"),
- dsv("bar 1.0.0", "shared >3.0.0"),
- dsv("shared 2.0.0"),
- dsv("shared 4.0.0"),
- },
- //errp: []string{"shared", "foo", "bar"}, // dart's has this...
- errp: []string{"foo", "bar"},
- },
- {
- n: "no valid solution",
- ds: []depspec{
- dsv("root 0.0.0", "a *", "b *"),
- dsv("a 1.0.0", "b 1.0.0"),
- dsv("a 2.0.0", "b 2.0.0"),
- dsv("b 1.0.0", "a 2.0.0"),
- dsv("b 2.0.0", "a 1.0.0"),
- },
- errp: []string{"b", "a"},
- maxAttempts: 2,
- },
- {
- n: "no version that matches while backtracking",
- ds: []depspec{
- dsv("root 0.0.0", "a *", "b >1.0.0"),
- dsv("a 1.0.0"),
- dsv("b 1.0.0"),
- },
- errp: []string{"b", "root"},
- },
- {
- // The latest versions of a and b disagree on c. An older version of either
- // will resolve the problem. This test validates that b, which is farther
- // in the dependency graph from myapp, is downgraded first.
- n: "rolls back leaf versions first",
- ds: []depspec{
- dsv("root 0.0.0", "a *"),
- dsv("a 1.0.0", "b *"),
- dsv("a 2.0.0", "b *", "c 2.0.0"),
- dsv("b 1.0.0"),
- dsv("b 2.0.0", "c 1.0.0"),
- dsv("c 1.0.0"),
- dsv("c 2.0.0"),
- },
- r: mkresults(
- "a 2.0.0",
- "b 1.0.0",
- "c 2.0.0",
- ),
- maxAttempts: 2,
- },
- {
- // Only one version of baz, so foo and bar will have to downgrade until they
- // reach it.
- n: "simple transitive",
- ds: []depspec{
- dsv("root 0.0.0", "foo *"),
- dsv("foo 1.0.0", "bar 1.0.0"),
- dsv("foo 2.0.0", "bar 2.0.0"),
- dsv("foo 3.0.0", "bar 3.0.0"),
- dsv("bar 1.0.0", "baz *"),
- dsv("bar 2.0.0", "baz 2.0.0"),
- dsv("bar 3.0.0", "baz 3.0.0"),
- dsv("baz 1.0.0"),
- },
- r: mkresults(
- "foo 1.0.0",
- "bar 1.0.0",
- "baz 1.0.0",
- ),
- maxAttempts: 3,
- },
- {
- // Ensures the solver doesn't exhaustively search all versions of b when
- // it's a-2.0.0 whose dependency on c-2.0.0-nonexistent led to the
- // problem. We make sure b has more versions than a so that the solver
- // tries a first since it sorts sibling dependencies by number of
- // versions.
- n: "simple transitive",
- ds: []depspec{
- dsv("root 0.0.0", "a *", "b *"),
- dsv("a 1.0.0", "c 1.0.0"),
- dsv("a 2.0.0", "c 2.0.0"),
- dsv("b 1.0.0"),
- dsv("b 2.0.0"),
- dsv("b 3.0.0"),
- dsv("c 1.0.0"),
- },
- r: mkresults(
- "a 1.0.0",
- "b 3.0.0",
- "c 1.0.0",
- ),
- maxAttempts: 2,
- },
- {
- // Dependencies are ordered so that packages with fewer versions are
- // tried first. Here, there are two valid solutions (either a or b must
- // be downgraded once). The chosen one depends on which dep is traversed
- // first. Since b has fewer versions, it will be traversed first, which
- // means a will come later. Since later selections are revised first, a
- // gets downgraded.
- n: "traverse into package with fewer versions first",
- ds: []depspec{
- dsv("root 0.0.0", "a *", "b *"),
- dsv("a 1.0.0", "c *"),
- dsv("a 2.0.0", "c *"),
- dsv("a 3.0.0", "c *"),
- dsv("a 4.0.0", "c *"),
- dsv("a 5.0.0", "c 1.0.0"),
- dsv("b 1.0.0", "c *"),
- dsv("b 2.0.0", "c *"),
- dsv("b 3.0.0", "c *"),
- dsv("b 4.0.0", "c 2.0.0"),
- dsv("c 1.0.0"),
- dsv("c 2.0.0"),
- },
- r: mkresults(
- "a 4.0.0",
- "b 4.0.0",
- "c 2.0.0",
- ),
- maxAttempts: 2,
- },
- {
- // This is similar to the preceding fixture. When getting the number of
- // versions of a package to determine which to traverse first, versions
- // that are disallowed by the root package's constraints should not be
- // considered. Here, foo has more versions of bar in total (4), but
- // fewer that meet myapp's constraints (only 2). There is no solution,
- // but we will do less backtracking if foo is tested first.
- n: "traverse into package with fewer versions first",
- ds: []depspec{
- dsv("root 0.0.0", "foo *", "bar *"),
- dsv("foo 1.0.0", "none 2.0.0"),
- dsv("foo 2.0.0", "none 2.0.0"),
- dsv("foo 3.0.0", "none 2.0.0"),
- dsv("foo 4.0.0", "none 2.0.0"),
- dsv("bar 1.0.0"),
- dsv("bar 2.0.0"),
- dsv("bar 3.0.0"),
- dsv("none 1.0.0"),
- },
- errp: []string{"none", "foo"},
- maxAttempts: 2,
- },
- {
- // If there's a disjoint constraint on a package, then selecting other
- // versions of it is a waste of time: no possible versions can match. We
- // need to jump past it to the most recent package that affected the
- // constraint.
- n: "backjump past failed package on disjoint constraint",
- ds: []depspec{
- dsv("root 0.0.0", "a *", "foo *"),
- dsv("a 1.0.0", "foo *"),
- dsv("a 2.0.0", "foo <1.0.0"),
- dsv("foo 2.0.0"),
- dsv("foo 2.0.1"),
- dsv("foo 2.0.2"),
- dsv("foo 2.0.3"),
- dsv("foo 2.0.4"),
- dsv("none 1.0.0"),
- },
- r: mkresults(
- "a 1.0.0",
- "foo 2.0.4",
- ),
- maxAttempts: 2,
- },
- // TODO add fixture that tests proper handling of loops via aliases (where
- // a project that wouldn't be a loop is aliased to a project that is a loop)
-}
-
-func init() {
- // This sets up a hundred versions of foo and bar, 0.0.0 through 9.9.0. Each
- // version of foo depends on a baz with the same major version. Each version
- // of bar depends on a baz with the same minor version. There is only one
- // version of baz, 0.0.0, so only older versions of foo and bar will
- // satisfy it.
- fix := basicFixture{
- n: "complex backtrack",
- ds: []depspec{
- dsv("root 0.0.0", "foo *", "bar *"),
- dsv("baz 0.0.0"),
- },
- r: mkresults(
- "foo 0.9.0",
- "bar 9.0.0",
- "baz 0.0.0",
- ),
- maxAttempts: 10,
- }
-
- for i := 0; i < 10; i++ {
- for j := 0; j < 10; j++ {
- fix.ds = append(fix.ds, dsv(fmt.Sprintf("foo %v.%v.0", i, j), fmt.Sprintf("baz %v.0.0", i)))
- fix.ds = append(fix.ds, dsv(fmt.Sprintf("bar %v.%v.0", i, j), fmt.Sprintf("baz 0.%v.0", j)))
- }
- }
-
- basicFixtures = append(basicFixtures, fix)
-}
-
-// reachMaps contain externalReach()-type data for a given depspec fixture's
-// universe of proejcts, packages, and versions.
-type reachMap map[pident]map[string][]string
-
-type depspecSourceManager struct {
- specs []depspec
- rm reachMap
- ig map[string]bool
-}
-
-type fixSM interface {
- SourceManager
- rootSpec() depspec
- allSpecs() []depspec
- ignore() map[string]bool
-}
-
-var _ fixSM = &depspecSourceManager{}
-
-func newdepspecSM(ds []depspec, ignore []string) *depspecSourceManager {
- ig := make(map[string]bool)
- if len(ignore) > 0 {
- for _, pkg := range ignore {
- ig[pkg] = true
- }
- }
-
- return &depspecSourceManager{
- specs: ds,
- rm: computeBasicReachMap(ds),
- ig: ig,
- }
-}
-
-func (sm *depspecSourceManager) GetProjectInfo(n ProjectName, v Version) (Manifest, Lock, error) {
- for _, ds := range sm.specs {
- if n == ds.n && v.Matches(ds.v) {
- return ds, dummyLock{}, nil
- }
- }
-
- // TODO proper solver-type errors
- return nil, nil, fmt.Errorf("Project '%s' at version '%s' could not be found", n, v)
-}
-
-func (sm *depspecSourceManager) ExternalReach(n ProjectName, v Version) (map[string][]string, error) {
- id := pident{n: n, v: v}
- if m, exists := sm.rm[id]; exists {
- return m, nil
- }
- return nil, fmt.Errorf("No reach data for %s at version %s", n, v)
-}
-
-func (sm *depspecSourceManager) ListExternal(n ProjectName, v Version) ([]string, error) {
- // This should only be called for the root
- id := pident{n: n, v: v}
- if r, exists := sm.rm[id]; exists {
- return r[string(n)], nil
- }
- return nil, fmt.Errorf("No reach data for %s at version %s", n, v)
-}
-
-func (sm *depspecSourceManager) ListPackages(n ProjectName, v Version) (PackageTree, error) {
- id := pident{n: n, v: v}
- if r, exists := sm.rm[id]; exists {
- ptree := PackageTree{
- ImportRoot: string(n),
- Packages: map[string]PackageOrErr{
- string(n): PackageOrErr{
- P: Package{
- ImportPath: string(n),
- Name: string(n),
- Imports: r[string(n)],
- },
- },
- },
- }
- return ptree, nil
- }
-
- return PackageTree{}, fmt.Errorf("Project %s at version %s could not be found", n, v)
-}
-
-func (sm *depspecSourceManager) ListVersions(name ProjectName) (pi []Version, err error) {
- for _, ds := range sm.specs {
- if name == ds.n {
- pi = append(pi, ds.v)
- }
- }
-
- if len(pi) == 0 {
- err = fmt.Errorf("Project '%s' could not be found", name)
- }
-
- return
-}
-
-func (sm *depspecSourceManager) RepoExists(name ProjectName) (bool, error) {
- for _, ds := range sm.specs {
- if name == ds.n {
- return true, nil
- }
- }
-
- return false, nil
-}
-
-func (sm *depspecSourceManager) VendorCodeExists(name ProjectName) (bool, error) {
- return false, nil
-}
-
-func (sm *depspecSourceManager) Release() {}
-
-func (sm *depspecSourceManager) ExportProject(n ProjectName, v Version, to string) error {
- return fmt.Errorf("dummy sm doesn't support exporting")
-}
-
-func (sm *depspecSourceManager) rootSpec() depspec {
- return sm.specs[0]
-}
-
-func (sm *depspecSourceManager) allSpecs() []depspec {
- return sm.specs
-}
-
-func (sm *depspecSourceManager) ignore() map[string]bool {
- return sm.ig
-}
-
-type depspecBridge struct {
- *bridge
-}
-
-// override computeRootReach() on bridge to read directly out of the depspecs
-func (b *depspecBridge) computeRootReach() ([]string, error) {
- // This only gets called for the root project, so grab that one off the test
- // source manager
- dsm := b.sm.(fixSM)
- root := dsm.rootSpec()
-
- ptree, err := dsm.ListPackages(root.n, nil)
- if err != nil {
- return nil, err
- }
-
- return ptree.ListExternalImports(true, true, dsm.ignore())
-}
-
-// override verifyRoot() on bridge to prevent any filesystem interaction
-func (b *depspecBridge) verifyRoot(path string) error {
- root := b.sm.(fixSM).rootSpec()
- if string(root.n) != path {
- return fmt.Errorf("Expected only root project %q to computeRootReach(), got %q", root.n, path)
- }
-
- return nil
-}
-
-func (b *depspecBridge) listPackages(id ProjectIdentifier, v Version) (PackageTree, error) {
- return b.sm.(fixSM).ListPackages(b.key(id), v)
-}
-
-// override deduceRemoteRepo on bridge to make all our pkg/project mappings work
-// as expected
-func (b *depspecBridge) deduceRemoteRepo(path string) (*remoteRepo, error) {
- for _, ds := range b.sm.(fixSM).allSpecs() {
- n := string(ds.n)
- if path == n || strings.HasPrefix(path, n+"/") {
- return &remoteRepo{
- Base: n,
- RelPkg: strings.TrimPrefix(path, n+"/"),
- }, nil
- }
- }
- return nil, fmt.Errorf("Could not find %s, or any parent, in list of known fixtures", path)
-}
-
-// enforce interfaces
-var _ Manifest = depspec{}
-var _ Lock = dummyLock{}
-var _ Lock = fixLock{}
-
-// impl Spec interface
-func (ds depspec) DependencyConstraints() []ProjectDep {
- return ds.deps
-}
-
-// impl Spec interface
-func (ds depspec) TestDependencyConstraints() []ProjectDep {
- return ds.devdeps
-}
-
-// impl Spec interface
-func (ds depspec) Name() ProjectName {
- return ds.n
-}
-
-type fixLock []LockedProject
-
-func (fixLock) SolverVersion() string {
- return "-1"
-}
-
-// impl Lock interface
-func (fixLock) InputHash() []byte {
- return []byte("fooooorooooofooorooofoo")
-}
-
-// impl Lock interface
-func (l fixLock) Projects() []LockedProject {
- return l
-}
-
-type dummyLock struct{}
-
-// impl Lock interface
-func (_ dummyLock) SolverVersion() string {
- return "-1"
-}
-
-// impl Lock interface
-func (_ dummyLock) InputHash() []byte {
- return []byte("fooooorooooofooorooofoo")
-}
-
-// impl Lock interface
-func (_ dummyLock) Projects() []LockedProject {
- return nil
-}
-
-// We've borrowed this bestiary from pub's tests:
-// https://github.com/dart-lang/pub/blob/master/test/version_solver_test.dart
-
-// TODO finish converting all of these
-
-/*
-func basicGraph() {
- testResolve("circular dependency", {
- "myapp 1.0.0": {
- "foo": "1.0.0"
- },
- "foo 1.0.0": {
- "bar": "1.0.0"
- },
- "bar 1.0.0": {
- "foo": "1.0.0"
- }
- }, result: {
- "myapp from root": "1.0.0",
- "foo": "1.0.0",
- "bar": "1.0.0"
- });
-
-}
-
-func withLockFile() {
-
-}
-
-func rootDependency() {
- testResolve("with root source", {
- "myapp 1.0.0": {
- "foo": "1.0.0"
- },
- "foo 1.0.0": {
- "myapp from root": ">=1.0.0"
- }
- }, result: {
- "myapp from root": "1.0.0",
- "foo": "1.0.0"
- });
-
- testResolve("with different source", {
- "myapp 1.0.0": {
- "foo": "1.0.0"
- },
- "foo 1.0.0": {
- "myapp": ">=1.0.0"
- }
- }, result: {
- "myapp from root": "1.0.0",
- "foo": "1.0.0"
- });
-
- testResolve("with wrong version", {
- "myapp 1.0.0": {
- "foo": "1.0.0"
- },
- "foo 1.0.0": {
- "myapp": "<1.0.0"
- }
- }, error: couldNotSolve);
-}
-
-func unsolvable() {
-
- testResolve("mismatched descriptions", {
- "myapp 0.0.0": {
- "foo": "1.0.0",
- "bar": "1.0.0"
- },
- "foo 1.0.0": {
- "shared-x": "1.0.0"
- },
- "bar 1.0.0": {
- "shared-y": "1.0.0"
- },
- "shared-x 1.0.0": {},
- "shared-y 1.0.0": {}
- }, error: descriptionMismatch("shared", "foo", "bar"));
-
- testResolve("mismatched sources", {
- "myapp 0.0.0": {
- "foo": "1.0.0",
- "bar": "1.0.0"
- },
- "foo 1.0.0": {
- "shared": "1.0.0"
- },
- "bar 1.0.0": {
- "shared from mock2": "1.0.0"
- },
- "shared 1.0.0": {},
- "shared 1.0.0 from mock2": {}
- }, error: sourceMismatch("shared", "foo", "bar"));
-
-
-
- // This is a regression test for #18300.
- testResolve("...", {
- "myapp 0.0.0": {
- "angular": "any",
- "collection": "any"
- },
- "analyzer 0.12.2": {},
- "angular 0.10.0": {
- "di": ">=0.0.32 <0.1.0",
- "collection": ">=0.9.1 <1.0.0"
- },
- "angular 0.9.11": {
- "di": ">=0.0.32 <0.1.0",
- "collection": ">=0.9.1 <1.0.0"
- },
- "angular 0.9.10": {
- "di": ">=0.0.32 <0.1.0",
- "collection": ">=0.9.1 <1.0.0"
- },
- "collection 0.9.0": {},
- "collection 0.9.1": {},
- "di 0.0.37": {"analyzer": ">=0.13.0 <0.14.0"},
- "di 0.0.36": {"analyzer": ">=0.13.0 <0.14.0"}
- }, error: noVersion(["analyzer", "di"]), maxTries: 2);
-}
-
-func badSource() {
- testResolve("fail if the root package has a bad source in dep", {
- "myapp 0.0.0": {
- "foo from bad": "any"
- },
- }, error: unknownSource("myapp", "foo", "bad"));
-
- testResolve("fail if the root package has a bad source in dev dep", {
- "myapp 0.0.0": {
- "(dev) foo from bad": "any"
- },
- }, error: unknownSource("myapp", "foo", "bad"));
-
- testResolve("fail if all versions have bad source in dep", {
- "myapp 0.0.0": {
- "foo": "any"
- },
- "foo 1.0.0": {
- "bar from bad": "any"
- },
- "foo 1.0.1": {
- "baz from bad": "any"
- },
- "foo 1.0.3": {
- "bang from bad": "any"
- },
- }, error: unknownSource("foo", "bar", "bad"), maxTries: 3);
-
- testResolve("ignore versions with bad source in dep", {
- "myapp 1.0.0": {
- "foo": "any"
- },
- "foo 1.0.0": {
- "bar": "any"
- },
- "foo 1.0.1": {
- "bar from bad": "any"
- },
- "foo 1.0.3": {
- "bar from bad": "any"
- },
- "bar 1.0.0": {}
- }, result: {
- "myapp from root": "1.0.0",
- "foo": "1.0.0",
- "bar": "1.0.0"
- }, maxTries: 3);
-}
-
-func backtracking() {
- testResolve("circular dependency on older version", {
- "myapp 0.0.0": {
- "a": ">=1.0.0"
- },
- "a 1.0.0": {},
- "a 2.0.0": {
- "b": "1.0.0"
- },
- "b 1.0.0": {
- "a": "1.0.0"
- }
- }, result: {
- "myapp from root": "0.0.0",
- "a": "1.0.0"
- }, maxTries: 2);
-}
-*/
diff --git a/vendor/github.com/sdboyer/vsolver/solve_test.go b/vendor/github.com/sdboyer/vsolver/solve_test.go
deleted file mode 100644
index 5c54683..0000000
--- a/vendor/github.com/sdboyer/vsolver/solve_test.go
+++ /dev/null
@@ -1,388 +0,0 @@
-package vsolver
-
-import (
- "flag"
- "fmt"
- "io/ioutil"
- "log"
- "os"
- "reflect"
- "sort"
- "strings"
- "testing"
-)
-
-var fixtorun string
-
-// TODO regression test ensuring that locks with only revs for projects don't cause errors
-func init() {
- flag.StringVar(&fixtorun, "vsolver.fix", "", "A single fixture to run in TestBasicSolves")
-}
-
-var stderrlog = log.New(os.Stderr, "", 0)
-
-func fixSolve(args SolveArgs, o SolveOpts, sm SourceManager) (Result, error) {
- if testing.Verbose() {
- o.Trace = true
- o.TraceLogger = stderrlog
- }
-
- si, err := Prepare(args, o, sm)
- s := si.(*solver)
- if err != nil {
- return nil, err
- }
-
- fixb := &depspecBridge{
- s.b.(*bridge),
- }
- s.b = fixb
-
- return s.Solve()
-}
-
-// Test all the basic table fixtures.
-//
-// Or, just the one named in the fix arg.
-func TestBasicSolves(t *testing.T) {
- for _, fix := range basicFixtures {
- if fixtorun == "" || fixtorun == fix.n {
- solveBasicsAndCheck(fix, t)
- if testing.Verbose() {
- // insert a line break between tests
- stderrlog.Println("")
- }
- }
- }
-}
-
-func solveBasicsAndCheck(fix basicFixture, t *testing.T) (res Result, err error) {
- if testing.Verbose() {
- stderrlog.Printf("[[fixture %q]]", fix.n)
- }
- sm := newdepspecSM(fix.ds, nil)
-
- args := SolveArgs{
- Root: string(fix.ds[0].Name()),
- Name: ProjectName(fix.ds[0].Name()),
- Manifest: fix.ds[0],
- Lock: dummyLock{},
- }
-
- o := SolveOpts{
- Downgrade: fix.downgrade,
- ChangeAll: fix.changeall,
- }
-
- if fix.l != nil {
- args.Lock = fix.l
- }
-
- res, err = fixSolve(args, o, sm)
-
- return fixtureSolveSimpleChecks(fix, res, err, t)
-}
-
-// Test all the bimodal table fixtures.
-//
-// Or, just the one named in the fix arg.
-func TestBimodalSolves(t *testing.T) {
- if fixtorun != "" {
- if fix, exists := bimodalFixtures[fixtorun]; exists {
- solveBimodalAndCheck(fix, t)
- }
- } else {
- // sort them by their keys so we get stable output
- var names []string
- for n := range bimodalFixtures {
- names = append(names, n)
- }
-
- sort.Strings(names)
- for _, n := range names {
- solveBimodalAndCheck(bimodalFixtures[n], t)
- if testing.Verbose() {
- // insert a line break between tests
- stderrlog.Println("")
- }
- }
- }
-}
-
-func solveBimodalAndCheck(fix bimodalFixture, t *testing.T) (res Result, err error) {
- if testing.Verbose() {
- stderrlog.Printf("[[fixture %q]]", fix.n)
- }
- sm := newbmSM(fix.ds, fix.ignore)
-
- args := SolveArgs{
- Root: string(fix.ds[0].Name()),
- Name: ProjectName(fix.ds[0].Name()),
- Manifest: fix.ds[0],
- Lock: dummyLock{},
- Ignore: fix.ignore,
- }
-
- o := SolveOpts{
- Downgrade: fix.downgrade,
- ChangeAll: fix.changeall,
- }
-
- if fix.l != nil {
- args.Lock = fix.l
- }
-
- res, err = fixSolve(args, o, sm)
-
- return fixtureSolveSimpleChecks(fix, res, err, t)
-}
-
-func fixtureSolveSimpleChecks(fix specfix, res Result, err error, t *testing.T) (Result, error) {
- if err != nil {
- errp := fix.expectErrs()
- if len(errp) == 0 {
- t.Errorf("(fixture: %q) Solver failed; error was type %T, text: %q", fix.name(), err, err)
- return res, err
- }
-
- switch fail := err.(type) {
- case *badOptsFailure:
- t.Errorf("(fixture: %q) Unexpected bad opts failure solve error: %s", fix.name(), err)
- case *noVersionError:
- if errp[0] != string(fail.pn.LocalName) { // TODO identifierify
- t.Errorf("(fixture: %q) Expected failure on project %s, but was on project %s", fix.name(), errp[0], fail.pn.LocalName)
- }
-
- ep := make(map[string]struct{})
- for _, p := range errp[1:] {
- ep[p] = struct{}{}
- }
-
- found := make(map[string]struct{})
- for _, vf := range fail.fails {
- for _, f := range getFailureCausingProjects(vf.f) {
- found[f] = struct{}{}
- }
- }
-
- var missing []string
- var extra []string
- for p := range found {
- if _, has := ep[p]; !has {
- extra = append(extra, p)
- }
- }
- if len(extra) > 0 {
- t.Errorf("(fixture: %q) Expected solve failures due to projects %s, but solve failures also arose from %s", fix.name(), strings.Join(errp[1:], ", "), strings.Join(extra, ", "))
- }
-
- for p := range ep {
- if _, has := found[p]; !has {
- missing = append(missing, p)
- }
- }
- if len(missing) > 0 {
- t.Errorf("(fixture: %q) Expected solve failures due to projects %s, but %s had no failures", fix.name(), strings.Join(errp[1:], ", "), strings.Join(missing, ", "))
- }
-
- default:
- // TODO round these out
- panic(fmt.Sprintf("unhandled solve failure type: %s", err))
- }
- } else if len(fix.expectErrs()) > 0 {
- t.Errorf("(fixture: %q) Solver succeeded, but expected failure", fix.name())
- } else {
- r := res.(result)
- if fix.maxTries() > 0 && r.Attempts() > fix.maxTries() {
- t.Errorf("(fixture: %q) Solver completed in %v attempts, but expected %v or fewer", fix.name(), r.att, fix.maxTries())
- }
-
- // Dump result projects into a map for easier interrogation
- rp := make(map[string]Version)
- for _, p := range r.p {
- pa := p.toAtom()
- rp[string(pa.id.LocalName)] = pa.v
- }
-
- fixlen, rlen := len(fix.result()), len(rp)
- if fixlen != rlen {
- // Different length, so they definitely disagree
- t.Errorf("(fixture: %q) Solver reported %v package results, but fixture expected %v", fix.name(), rlen, fixlen)
- }
-
- // Whether or not len is same, still have to verify that results agree
- // Walk through fixture/expected results first
- for p, v := range fix.result() {
- if av, exists := rp[p]; !exists {
- t.Errorf("(fixture: %q) Project %q expected but missing from results", fix.name(), p)
- } else {
- // delete result from map so we skip it on the reverse pass
- delete(rp, p)
- if v != av {
- t.Errorf("(fixture: %q) Expected version %q of project %q, but actual version was %q", fix.name(), v, p, av)
- }
- }
- }
-
- // Now walk through remaining actual results
- for p, v := range rp {
- if fv, exists := fix.result()[p]; !exists {
- t.Errorf("(fixture: %q) Unexpected project %q present in results", fix.name(), p)
- } else if v != fv {
- t.Errorf("(fixture: %q) Got version %q of project %q, but expected version was %q", fix.name(), v, p, fv)
- }
- }
- }
-
- return res, err
-}
-
-// This tests that, when a root lock is underspecified (has only a version), we
-// don't allow a match on that version from a rev in the manifest. We may allow
-// this in the future, but disallow it for now because going from an immutable
-// requirement to a mutable lock automagically is a bad direction that could
-// produce weird side effects.
-func TestRootLockNoVersionPairMatching(t *testing.T) {
- fix := basicFixture{
- n: "does not pair bare revs in manifest with unpaired lock version",
- ds: []depspec{
- dsv("root 0.0.0", "foo *"), // foo's constraint rewritten below to foorev
- dsv("foo 1.0.0", "bar 1.0.0"),
- dsv("foo 1.0.1 foorev", "bar 1.0.1"),
- dsv("foo 1.0.2 foorev", "bar 1.0.2"),
- dsv("bar 1.0.0"),
- dsv("bar 1.0.1"),
- dsv("bar 1.0.2"),
- },
- l: mklock(
- "foo 1.0.1",
- ),
- r: mkresults(
- "foo 1.0.2 foorev",
- "bar 1.0.1",
- ),
- }
-
- pd := fix.ds[0].deps[0]
- pd.Constraint = Revision("foorev")
- fix.ds[0].deps[0] = pd
-
- sm := newdepspecSM(fix.ds, nil)
-
- l2 := make(fixLock, 1)
- copy(l2, fix.l)
- l2[0].v = nil
-
- args := SolveArgs{
- Root: string(fix.ds[0].Name()),
- Name: ProjectName(fix.ds[0].Name()),
- Manifest: fix.ds[0],
- Lock: l2,
- }
-
- res, err := fixSolve(args, SolveOpts{}, sm)
-
- fixtureSolveSimpleChecks(fix, res, err, t)
-}
-
-func getFailureCausingProjects(err error) (projs []string) {
- switch e := err.(type) {
- case *noVersionError:
- projs = append(projs, string(e.pn.LocalName)) // TODO identifierify
- case *disjointConstraintFailure:
- for _, f := range e.failsib {
- projs = append(projs, string(f.depender.id.LocalName))
- }
- case *versionNotAllowedFailure:
- for _, f := range e.failparent {
- projs = append(projs, string(f.depender.id.LocalName))
- }
- case *constraintNotAllowedFailure:
- // No sane way of knowing why the currently selected version is
- // selected, so do nothing
- case *sourceMismatchFailure:
- projs = append(projs, string(e.prob.id.LocalName))
- for _, c := range e.sel {
- projs = append(projs, string(c.depender.id.LocalName))
- }
- case *checkeeHasProblemPackagesFailure:
- projs = append(projs, string(e.goal.id.LocalName))
- for _, errdep := range e.failpkg {
- for _, atom := range errdep.deppers {
- projs = append(projs, string(atom.id.LocalName))
- }
- }
- case *depHasProblemPackagesFailure:
- projs = append(projs, string(e.goal.depender.id.LocalName), string(e.goal.dep.Ident.LocalName))
- default:
- panic("unknown failtype")
- }
-
- return
-}
-
-func TestBadSolveOpts(t *testing.T) {
- sm := newdepspecSM(basicFixtures[0].ds, nil)
-
- o := SolveOpts{}
- args := SolveArgs{}
- _, err := Prepare(args, o, sm)
- if err == nil {
- t.Errorf("Should have errored on missing manifest")
- }
-
- m, _, _ := sm.GetProjectInfo(basicFixtures[0].ds[0].n, basicFixtures[0].ds[0].v)
- args.Manifest = m
- _, err = Prepare(args, o, sm)
- if err == nil {
- t.Errorf("Should have errored on empty root")
- }
-
- args.Root = "root"
- _, err = Prepare(args, o, sm)
- if err == nil {
- t.Errorf("Should have errored on empty name")
- }
-
- args.Name = "root"
- _, err = Prepare(args, o, sm)
- if err != nil {
- t.Errorf("Basic conditions satisfied, solve should have gone through, err was %s", err)
- }
-
- o.Trace = true
- _, err = Prepare(args, o, sm)
- if err == nil {
- t.Errorf("Should have errored on trace with no logger")
- }
-
- o.TraceLogger = log.New(ioutil.Discard, "", 0)
- _, err = Prepare(args, o, sm)
- if err != nil {
- t.Errorf("Basic conditions re-satisfied, solve should have gone through, err was %s", err)
- }
-}
-
-func TestIgnoreDedupe(t *testing.T) {
- fix := basicFixtures[0]
-
- ig := []string{"foo", "foo", "bar"}
- args := SolveArgs{
- Root: string(fix.ds[0].Name()),
- Name: ProjectName(fix.ds[0].Name()),
- Manifest: fix.ds[0],
- Ignore: ig,
- }
-
- s, _ := Prepare(args, SolveOpts{}, newdepspecSM(basicFixtures[0].ds, nil))
- ts := s.(*solver)
-
- expect := map[string]bool{
- "foo": true,
- "bar": true,
- }
-
- if !reflect.DeepEqual(ts.ig, expect) {
- t.Errorf("Expected solver's ignore list to be deduplicated map, got %s", ts.ig)
- }
-}
diff --git a/vendor/github.com/sdboyer/vsolver/source_manager.go b/vendor/github.com/sdboyer/vsolver/source_manager.go
deleted file mode 100644
index 3100b37..0000000
--- a/vendor/github.com/sdboyer/vsolver/source_manager.go
+++ /dev/null
@@ -1,307 +0,0 @@
-package vsolver
-
-import (
- "encoding/json"
- "fmt"
- "go/build"
- "os"
- "path"
-
- "github.com/Masterminds/vcs"
-)
-
-// A SourceManager is responsible for retrieving, managing, and interrogating
-// source repositories. Its primary purpose is to serve the needs of a Solver,
-// but it is handy for other purposes, as well.
-//
-// vsolver's built-in SourceManager, accessible via NewSourceManager(), is
-// intended to be generic and sufficient for any purpose. It provides some
-// additional semantics around the methods defined here.
-type SourceManager interface {
- // RepoExists checks if a repository exists, either upstream or in the
- // SourceManager's central repository cache.
- RepoExists(ProjectName) (bool, error)
-
- // VendorCodeExists checks if a code tree exists within the stored vendor
- // directory for the the provided import path name.
- VendorCodeExists(ProjectName) (bool, error)
-
- // ListVersions retrieves a list of the available versions for a given
- // repository name.
- ListVersions(ProjectName) ([]Version, error)
-
- // ListPackages retrieves a tree of the Go packages at or below the provided
- // import path, at the provided version.
- ListPackages(ProjectName, Version) (PackageTree, error)
-
- // GetProjectInfo returns manifest and lock information for the provided
- // import path. vsolver currently requires that projects be rooted at their
- // repository root, which means that this ProjectName must also be a
- // repository root.
- GetProjectInfo(ProjectName, Version) (Manifest, Lock, error)
-
- // ExportProject writes out the tree of the provided import path, at the
- // provided version, to the provided directory.
- ExportProject(ProjectName, Version, string) error
-
- // Release lets go of any locks held by the SourceManager.
- Release()
-}
-
-// A ProjectAnalyzer is responsible for analyzing a path for Manifest and Lock
-// information. Tools relying on vsolver must implement one.
-type ProjectAnalyzer interface {
- GetInfo(build.Context, ProjectName) (Manifest, Lock, error)
-}
-
-// ExistenceError is a specialized error type that, in addition to the standard
-// error interface, also indicates the amount of searching for a project's
-// existence that has been performed, and what level of existence has been
-// ascertained.
-//
-// ExistenceErrors should *only* be returned if the (lack of) existence of a
-// project was the underlying cause of the error.
-//type ExistenceError interface {
-//error
-//Existence() (search ProjectExistence, found ProjectExistence)
-//}
-
-// sourceManager is the default SourceManager for vsolver.
-//
-// There's no (planned) reason why it would need to be reimplemented by other
-// tools; control via dependency injection is intended to be sufficient.
-type sourceManager struct {
- cachedir, basedir string
- pms map[ProjectName]*pmState
- an ProjectAnalyzer
- ctx build.Context
- //pme map[ProjectName]error
-}
-
-// Holds a projectManager, caches of the managed project's data, and information
-// about the freshness of those caches
-type pmState struct {
- pm *projectManager
- cf *os.File // handle for the cache file
- vcur bool // indicates that we've called ListVersions()
-}
-
-// NewSourceManager produces an instance of vsolver's built-in SourceManager. It
-// takes a cache directory (where local instances of upstream repositories are
-// stored), a base directory for the project currently being worked on, and a
-// force flag indicating whether to overwrite the global cache lock file (if
-// present).
-//
-// The returned SourceManager aggressively caches information wherever
-// possible. If tools need to do preliminary work involving upstream
-// repository analysis prior to invoking a solve run, it is recommended that
-// they create this SourceManager as early as possible and use it to that
-// end. That way, the solver can benefit from any caches that may have
-// already been warmed.
-//
-// vsolver's SourceManager is intended to be threadsafe (if it's not, please
-// file a bug!). It should certainly be safe to reuse from one solving run to
-// the next; however, the fact that it takes a basedir as an argument makes it
-// much less useful for simultaneous use by separate solvers operating on
-// different root projects. This architecture may change in the future.
-func NewSourceManager(an ProjectAnalyzer, cachedir, basedir string, force bool) (SourceManager, error) {
- if an == nil {
- return nil, fmt.Errorf("A ProjectAnalyzer must be provided to the SourceManager.")
- }
-
- err := os.MkdirAll(cachedir, 0777)
- if err != nil {
- return nil, err
- }
-
- glpath := path.Join(cachedir, "sm.lock")
- _, err = os.Stat(glpath)
- if err == nil && !force {
- return nil, fmt.Errorf("Another process has locked the cachedir, or crashed without cleaning itself properly. Pass force=true to override.")
- }
-
- _, err = os.OpenFile(glpath, os.O_CREATE|os.O_RDONLY, 0700) // is 0700 sane for this purpose?
- if err != nil {
- return nil, fmt.Errorf("Failed to create global cache lock file at %s with err %s", glpath, err)
- }
-
- ctx := build.Default
- // Replace GOPATH with our cache dir
- ctx.GOPATH = cachedir
-
- return &sourceManager{
- cachedir: cachedir,
- pms: make(map[ProjectName]*pmState),
- ctx: ctx,
- an: an,
- }, nil
-}
-
-// Release lets go of any locks held by the SourceManager.
-//
-// This will also call Flush(), which will write any relevant caches to disk.
-func (sm *sourceManager) Release() {
- os.Remove(path.Join(sm.cachedir, "sm.lock"))
-}
-
-// GetProjectInfo returns manifest and lock information for the provided import
-// path. vsolver currently requires that projects be rooted at their repository
-// root, which means that this ProjectName must also be a repository root.
-//
-// The work of producing the manifest and lock information is delegated to the
-// injected ProjectAnalyzer.
-func (sm *sourceManager) GetProjectInfo(n ProjectName, v Version) (Manifest, Lock, error) {
- pmc, err := sm.getProjectManager(n)
- if err != nil {
- return nil, nil, err
- }
-
- return pmc.pm.GetInfoAt(v)
-}
-
-// ListPackages retrieves a tree of the Go packages at or below the provided
-// import path, at the provided version.
-func (sm *sourceManager) ListPackages(n ProjectName, v Version) (PackageTree, error) {
- pmc, err := sm.getProjectManager(n)
- if err != nil {
- return PackageTree{}, err
- }
-
- return pmc.pm.ListPackages(v)
-}
-
-// ListVersions retrieves a list of the available versions for a given
-// repository name.
-//
-// The list is not sorted; while it may be returned in the order that the
-// underlying VCS reports version information, no guarantee is made. It is
-// expected that the caller either not care about order, or sort the result
-// themselves.
-//
-// This list is always retrieved from upstream; if upstream is not accessible
-// (network outage, access issues, or the resource actually went away), an error
-// will be returned.
-func (sm *sourceManager) ListVersions(n ProjectName) ([]Version, error) {
- pmc, err := sm.getProjectManager(n)
- if err != nil {
- // TODO More-er proper-er errors
- return nil, err
- }
-
- return pmc.pm.ListVersions()
-}
-
-// VendorCodeExists checks if a code tree exists within the stored vendor
- // directory for the provided import path name.
-func (sm *sourceManager) VendorCodeExists(n ProjectName) (bool, error) {
- pms, err := sm.getProjectManager(n)
- if err != nil {
- return false, err
- }
-
- return pms.pm.CheckExistence(existsInVendorRoot), nil
-}
-
-func (sm *sourceManager) RepoExists(n ProjectName) (bool, error) {
- pms, err := sm.getProjectManager(n)
- if err != nil {
- return false, err
- }
-
- return pms.pm.CheckExistence(existsInCache) || pms.pm.CheckExistence(existsUpstream), nil
-}
-
-// ExportProject writes out the tree of the provided import path, at the
-// provided version, to the provided directory.
-func (sm *sourceManager) ExportProject(n ProjectName, v Version, to string) error {
- pms, err := sm.getProjectManager(n)
- if err != nil {
- return err
- }
-
- return pms.pm.ExportVersionTo(v, to)
-}
-
-// getProjectManager gets the project manager for the given ProjectName.
-//
-// If no such manager yet exists, it attempts to create one.
-func (sm *sourceManager) getProjectManager(n ProjectName) (*pmState, error) {
- // Check pm cache and errcache first
- if pm, exists := sm.pms[n]; exists {
- return pm, nil
- //} else if pme, errexists := sm.pme[name]; errexists {
- //return nil, pme
- }
-
- repodir := path.Join(sm.cachedir, "src", string(n))
- // TODO be more robust about this
- r, err := vcs.NewRepo("https://"+string(n), repodir)
- if err != nil {
- // TODO be better
- return nil, err
- }
- if !r.CheckLocal() {
- // TODO cloning the repo here puts it on a blocking, and possibly
- // unnecessary path. defer it
- err = r.Get()
- if err != nil {
- // TODO be better
- return nil, err
- }
- }
-
- // Ensure cache dir exists
- metadir := path.Join(sm.cachedir, "metadata", string(n))
- err = os.MkdirAll(metadir, 0777)
- if err != nil {
- // TODO be better
- return nil, err
- }
-
- pms := &pmState{}
- cpath := path.Join(metadir, "cache.json")
- fi, err := os.Stat(cpath)
- var dc *projectDataCache
- if fi != nil {
- pms.cf, err = os.OpenFile(cpath, os.O_RDWR, 0777)
- if err != nil {
- // TODO be better
- return nil, fmt.Errorf("Err on opening metadata cache file: %s", err)
- }
-
- err = json.NewDecoder(pms.cf).Decode(&dc)
- if err != nil {
- // TODO be better
- return nil, fmt.Errorf("Err on JSON decoding metadata cache file: %s", err)
- }
- } else {
- // TODO commented this out for now, until we manage it correctly
- //pms.cf, err = os.Create(cpath)
- //if err != nil {
- //// TODO be better
- //return nil, fmt.Errorf("Err on creating metadata cache file: %s", err)
- //}
-
- dc = &projectDataCache{
- Infos: make(map[Revision]projectInfo),
- VMap: make(map[Version]Revision),
- RMap: make(map[Revision][]Version),
- }
- }
-
- pm := &projectManager{
- n: n,
- ctx: sm.ctx,
- vendordir: sm.basedir + "/vendor",
- an: sm.an,
- dc: dc,
- crepo: &repo{
- rpath: repodir,
- r: r,
- },
- }
-
- pms.pm = pm
- sm.pms[n] = pms
- return pms, nil
-}
diff --git a/vendor/github.com/sdboyer/vsolver/types.go b/vendor/github.com/sdboyer/vsolver/types.go
deleted file mode 100644
index 0cb54e7..0000000
--- a/vendor/github.com/sdboyer/vsolver/types.go
+++ /dev/null
@@ -1,111 +0,0 @@
-package vsolver
-
-import "fmt"
-
-type ProjectIdentifier struct {
- LocalName ProjectName
- NetworkName string
-}
-
-func (i ProjectIdentifier) less(j ProjectIdentifier) bool {
- if i.LocalName < j.LocalName {
- return true
- }
- if j.LocalName < i.LocalName {
- return false
- }
-
- return i.NetworkName < j.NetworkName
-}
-
-func (i ProjectIdentifier) eq(j ProjectIdentifier) bool {
- if i.LocalName != j.LocalName {
- return false
- }
- if i.NetworkName == j.NetworkName {
- return true
- }
-
- if (i.NetworkName == "" && j.NetworkName == string(j.LocalName)) ||
- (j.NetworkName == "" && i.NetworkName == string(i.LocalName)) {
- return true
- }
-
- return false
-}
-
-func (i ProjectIdentifier) netName() string {
- if i.NetworkName == "" {
- return string(i.LocalName)
- }
- return i.NetworkName
-}
-
-func (i ProjectIdentifier) errString() string {
- if i.NetworkName == "" || i.NetworkName == string(i.LocalName) {
- return string(i.LocalName)
- }
- return fmt.Sprintf("%s (from %s)", i.LocalName, i.NetworkName)
-}
-
-func (i ProjectIdentifier) normalize() ProjectIdentifier {
- if i.NetworkName == "" {
- i.NetworkName = string(i.LocalName)
- }
-
- return i
-}
-
-// bimodalIdentifiers are used to track work to be done in the unselected queue.
-type bimodalIdentifier struct {
- id ProjectIdentifier
- pl []string
-}
-
-type ProjectName string
-
-type atom struct {
- id ProjectIdentifier
- v Version
-}
-
-type atomWithPackages struct {
- a atom
- pl []string
-}
-
-type ProjectDep struct {
- Ident ProjectIdentifier
- Constraint Constraint
-}
-
-// Package represents a Go package. It contains a subset of the information
-// that go/build.Package provides.
-type Package struct {
- ImportPath, CommentPath string
- Name string
- Imports []string
- TestImports []string
-}
-
-type byImportPath []Package
-
-func (s byImportPath) Len() int { return len(s) }
-func (s byImportPath) Less(i, j int) bool { return s[i].ImportPath < s[j].ImportPath }
-func (s byImportPath) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
-
-// completeDep (name hopefully to change) provides the whole picture of a
-// dependency - the root (repo and project, since currently we assume the two
-// are the same) name, a constraint, and the actual packages needed that are
-// under that root.
-type completeDep struct {
- // The base ProjectDep
- ProjectDep
- // The specific packages required from the ProjectDep
- pl []string
-}
-
-type dependency struct {
- depender atom
- dep completeDep
-}
diff --git a/vendor/github.com/sdboyer/vsolver/version_queue.go b/vendor/github.com/sdboyer/vsolver/version_queue.go
deleted file mode 100644
index 22e7b0c..0000000
--- a/vendor/github.com/sdboyer/vsolver/version_queue.go
+++ /dev/null
@@ -1,117 +0,0 @@
-package vsolver
-
-import (
- "fmt"
- "strings"
-)
-
-type failedVersion struct {
- v Version
- f error
-}
-
-type versionQueue struct {
- id ProjectIdentifier
- pi []Version
- fails []failedVersion
- sm sourceBridge
- failed bool
- hasLock, allLoaded bool
-}
-
-func newVersionQueue(id ProjectIdentifier, lockv atom, sm sourceBridge) (*versionQueue, error) {
- vq := &versionQueue{
- id: id,
- sm: sm,
- }
-
- if lockv != nilpa {
- vq.hasLock = true
- vq.pi = append(vq.pi, lockv.v)
- } else {
- var err error
- vq.pi, err = vq.sm.listVersions(vq.id)
- if err != nil {
- // TODO pushing this error this early entails that we
- // unconditionally deep scan (e.g. vendor), as well as hitting the
- // network.
- return nil, err
- }
- vq.allLoaded = true
- }
-
- return vq, nil
-}
-
-func (vq *versionQueue) current() Version {
- if len(vq.pi) > 0 {
- return vq.pi[0]
- }
-
- return nil
-}
-
-func (vq *versionQueue) advance(fail error) (err error) {
- // The current version may have failed, but the next one hasn't
- vq.failed = false
-
- if len(vq.pi) == 0 {
- return
- }
-
- vq.fails = append(vq.fails, failedVersion{
- v: vq.pi[0],
- f: fail,
- })
- if vq.allLoaded {
- vq.pi = vq.pi[1:]
- return
- }
-
- vq.allLoaded = true
- // Can only get here if no lock was initially provided, so we know we
- // should have that
- lockv := vq.pi[0]
-
- vq.pi, err = vq.sm.listVersions(vq.id)
- if err != nil {
- return
- }
-
- // search for and remove locked version
- // TODO should be able to avoid O(n) here each time...if it matters
- for k, pi := range vq.pi {
- if pi == lockv {
- // delete the locked version from the queue
- vq.pi = append(vq.pi[:k], vq.pi[k+1:]...)
- break
- }
- }
-
- // normal end of queue. we don't error; it's left to the caller to infer an
- // empty queue w/a subsequent call to current(), which will return an empty
- // item.
- // TODO this approach kinda...sucks
- return
-}
-
-// isExhausted reports whether the queue has definitely been exhausted,
-// in which case it returns true.
-//
-// It may return false negatives - suggesting that there is more in the queue
-// when a subsequent call to current() will be empty. Plan accordingly.
-func (vq *versionQueue) isExhausted() bool {
- if !vq.allLoaded {
- return false
- }
- return len(vq.pi) == 0
-}
-
-func (vq *versionQueue) String() string {
- var vs []string
-
- for _, v := range vq.pi {
- vs = append(vs, v.String())
- }
- return fmt.Sprintf("[%s]", strings.Join(vs, ", "))
-}