Compare commits

...

270 Commits

Author SHA1 Message Date
d666e93ecc repo: Add option review.URL.uploadtopic support
This patch adds the option to include topic branches by adding the
following to a .gitconfig file:

    uploadtopic = true

This option is only read when the -t option is not already
specified on the command line.
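
In full section form this might look like (review URL illustrative):

    [review "https://gerrit.example.com/"]
        uploadtopic = true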

Change-Id: I0e0eea49438bb4e4a21c2ac5bd498b68b5a9a845
2012-06-05 08:01:29 -07:00
3f61950f01 Use gerrit.googlesource.com/git-repo as the default URL
This is basically the same repository, but may be slightly more
up-to-date than the one on code.google.com/p/git-repo.

Change-Id: I5c99539f53231958eefb6993f00997c9adf0a3c9
2012-06-05 07:57:24 -07:00
4fd38ecc3a Detect git is not installed
Fix detection for Git not being in $PATH during the initial
run of `repo init` in a new directory.

Change-Id: I2b1fcce1fb8afc47271f5c3bd2a28369009b2fb7
2012-06-05 07:56:09 -07:00
9fae805e04 Pass http_proxy as -c http.proxy on Mac OS X
The system libcurl library seems to ignore http_proxy on Mac OS
X systems. Pass the http_proxy environment variable (if set) via
`git -c http.proxy` whenever running a Git command.

Change-Id: I0ab29336897178f70b85092601f9fcc306dd17e1
2012-05-25 08:21:37 -07:00
6a927c5d19 hooks/pre-auto-gc: look in sysfs to see if a battery is known.
Barring any kernel bugs, if this directory exists and there is
a symlink in there (which will point to the battery object),
that means there is a battery known to the kernel.

No symlink should mean no battery as far as the kernel is concerned.

Change-Id: Ib12819a5bbb816f0ae5ca080e5812a2db08441e9
2012-05-25 02:25:59 -07:00
eca119e5d6 Allow projects with groups=None
Mirror manifest and repo projects are outside the manifest and
have no groups.  Allow project groups to be None for these
projects.

Change-Id: I3e1c4add894fe1c43aa4e77a1fc1558aa10dd191
2012-05-24 15:40:05 -07:00
6ba6ba0ef3 Fix initial sync broken by sync-c option
Change-Id: I308753da8944e6ce5c46e3bfee1bcd41d5b7e292
2012-05-24 09:46:50 -07:00
23acdd3f14 Parse manifest and local_manifest together
Combine manifest and local_manifest into a single list of elements
before parsing.  This will allow elements in the local_manifest to
affect elements in the main manifest.

Change-Id: I4d34c9260b299a76be2960b07c0c3fe1af35f33c
2012-05-24 09:32:15 -07:00
2644874d9d ManifestXml: add include support
Having the ability to include other manifests is a very practical feature
that eases manifest management. It allows a manifest to be divided into
separate files, creating different environments depending on what we want
to release.

Includes can be nested to any depth; the manifest configs are simply
concatenated as if they were in a single file.

The "repo manifest" command will create a single manifest, and will not
recreate the include hierarchy.

for example:
Our developement manifest will look like:

<?xml version='1.0' encoding='UTF-8'?>
<manifest>
  <default revision="platform/android/main" remote="intel"/>
  <include name="server.xml"/> <!-- The Server configuration -->
  <include name="aosp.xml" />  <!-- All the AOSP projects -->
  <include name="bsp.xml" />   <!-- The BSP projects that we release in source form -->
  <include name="bsp-priv.xml" /> <!-- The source of the BSP projects we release in binary form -->
</manifest>

Our release manifest will look like:

<?xml version='1.0' encoding='UTF-8'?>
<manifest>
  <default revision="platform/android/release-ext" remote="intel"/>
  <include name="server.xml"/> <!-- The Server configuration -->
  <include name="aosp.xml" />  <!-- All the AOSP projects -->
  <include name="bsp.xml" />   <!-- The BSP projects that we release in source form -->
  <include name="bsp-ext.xml" /> <!-- The PREBUILT version of the BSP projects we release in binary form -->
</manifest>

And it is also easy to create and maintain a feature branch with a manifest that looks like:

<?xml version='1.0' encoding='UTF-8'?>
<manifest>
  <default revision="feature_branch_foobar" remote="intel"/>
  <include name="server.xml"/> <!-- The Server configuration -->
  <include name="aosp.xml" />  <!-- All the AOSP projects -->
  <include name="bsp.xml" />   <!-- The BSP projects that we release in source form -->
  <include name="bsp-priv.xml" /> <!-- The source of the BSP projects we release in binary form -->
</manifest>

Signed-off-by: Brian Harring <brian.harring@intel.com>
Signed-off-by: Pierre Tardy <pierre.tardy@intel.com>
Change-Id: I833a30d303039e485888768e6b81561b7665e89d
2012-05-24 09:07:24 -07:00
3d125940f6 repo download: add --ff-only option
Allows fast-forward-only (ff-only) download of a Gerrit patch.
This is necessary to automatically ensure that the patch will
be correctly submitted on ff-only Gerrit projects.

You can now use:
repo download (--ff-only|-f) project changeid/patchnumber

This is useful for automating verification of a patch's fast-forward
status in the context of build automation and commit gating (e.g. buildbot).

Change-Id: I403a667557a105411a633e62c8eec23d93724b43
Signed-off-by: Erwan Mahe <erwan.mahe@intel.com>
Signed-off-by: Pierre Tardy <pierre.tardy@intel.com>
2012-05-24 09:04:20 -07:00
a94f162b9f repo download: add --revert option
BZ: 4779
Allows reverting a Gerrit patch.
This patch is necessary for the on-demand creation of
engineering builds using buildbot

You can now use:
repo download [--revert|-r] project changeid/patchnumber

This is useful for automating the reverting of a patch in the context
of build automation and regression bisection.

Change-Id: I3985e80e4b2a230f83526191ea1379765a54bdcf
Signed-off-by: Erwan Mahe <erwan.mahe@intel.com>
Signed-off-by: Pierre Tardy <pierre.tardy@intel.com>
2012-05-24 09:03:10 -07:00
e5a2122e64 repo download: add --cherry-pick option
The default option uses git checkout, and thus overwrites the previous
checkout.  This is a problem for automated builds of several changesets
in the same project, e.g. daily builds of pending submissions.

You can now use:
repo download [--cherry-pick|-c] project changeid/patchnumber

This will parse the manifest, cd to the corresponding project,
download the changes to FETCH_HEAD, and cherry-pick the result.

This is useful for automating the cherry-picking of a patch in the
context of build automation and commit gating (e.g. buildbot).

Change-Id: Ib638afd87677f1be197afb7b0f73c70fb98909fe
Signed-off-by: Pierre Tardy <pierre.tardy@intel.com>
2012-05-24 09:02:38 -07:00
ccf86432b3 Avoid failing concat for multi-encoding filenames
repo status should output filenames one by one instead of trying to
build a string from incompatible encodings (like utf-8 and sjis
filenames)

Change-Id: I52282236ececa562f109f9ea4b2e971d2b4bc045
2012-05-24 08:58:10 -07:00
79770d269e Add sync-c option to manifest
There are use cases where fetching all branches is impractical and
we really need to fetch only one branch/tag;
e.g. there is a large project with binaries where every update of a
binary file is put on a separate branch, and
the whole project history might be too large for users to fetch.

Add 'sync-c' option to 'project' and 'default' tags to make it possible
to configure 'sync-c' behavior at per-project and per-manifest level.

Note that currently there is no way to override a boolean flag from
the command line: if 'sync-c' is set in the manifest, you cannot
force a full fetch by passing an argument to the repo tool.
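
A sketch of the attribute on each tag (revision, remote, and project
names illustrative):

    <default revision="master" remote="origin" sync-c="true"/>
    <project name="big-binaries" path="big" sync-c="true"/>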

Change-Id: Ie36fe5737304930493740370239403986590f593
2012-04-23 14:10:52 -07:00
c39864f5e1 Treat groups= as default
Previous incarnations of groups support left "groups=" in the
repo .config, which is now treated as "delete all the projects".
Treat empty groups configuration the same as no groups
configuration.

Change-Id: I57dab8dac55bdbf4cc181e2748cd2e4e510764f5
2012-04-23 13:43:41 -07:00
5465727e53 Fix syntax errors in subcmds/init.py
Fixes three errors:
Python doesn't like the line wrap after 'and'.
platform.system is a function, needs to be platform.system().
Typo all_platfroms instead of all_platforms.

Change-Id: Ia875e521bc01ae2eb321ec62d839173c00f86c2d
2012-04-23 13:43:41 -07:00
d21720db31 Add a --platform flag
Projects may optionally specify their platform
(e.g., groups="platform-linux" in the manifest).

By default, repo will automatically detect the platform. However,
users may specify --platform=[auto|all|linux|darwin].

Change-Id: Ie678851fb2fec5b0938aede01f16c53138a16537
2012-04-23 12:50:00 -07:00
971de8ea7b Refine groups functionality
Every project is in the group "default".  "-default" does not remove
it from this group.  All group names specified in the manifest
are positive names, as opposed to a mix of negative and positive.

Specified groups are resolved in order.  If init is supplied with
--groups="group1,-group2", the following describes the project
selection when syncing:

  * all projects in "group1" will be added, and
  * all projects in "group2" will be removed.

Change-Id: I1df3dcdb64bbd4cd80d675f9b2d3becbf721f661
2012-04-23 12:39:05 -07:00
24c1308840 Add project annotation handling to repo
Allow the optional addition of "annotation" nodes nested under
projects.  Each annotation node must have "name" and "value"
attributes.  These name/value pairs will be exported into the
environment during any forall command, prefixed with "REPO__"

In addition, an optional "keep" attribute with case insensitive "true"
or "false" values can be included to determine whether the annotation
will be exported with 'repo manifest'
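
A minimal sketch (project and annotation names illustrative):

    <project name="platform/foo" path="foo">
      <annotation name="BUILD_TYPE" value="debug" keep="true"/>
    </project>

During a forall command this would surface in the environment as
REPO__BUILD_TYPE=debug.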

Change-Id: Icd7540afaae02c958f769ce3d25661aa721a9de8
Signed-off-by: James W. Mills <jameswmills@gmail.com>
2012-04-23 12:35:08 -07:00
b962a1f5e0 Check if SHA1 is present in repository
Previously repo had incorrect code that did not really check
whether a sha1 is present in a project. It worked for tags, though.

Check whether a revision (either a tag or a sha1) is present by
using 'git rev-parse' functionality.

Change-Id: I1787f3348573948573948753987394839487572b
2012-04-23 11:09:17 -07:00
5acde75e5d Add manifest groups
Allows specifying a list of groups with a -g argument to repo init.
The groups act on a group= attribute specified on projects in the
manifest.
All projects are implicitly labelled with "default" unless they are
explicitly labelled "-default".
Prefixing a group with "-" removes matching projects from the list
of projects to sync.
If any non-inverted manifest groups are specified, the default label
is ignored.
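
For example (manifest URL illustrative):

    repo init -u https://example.com/manifest.git -g group1,-group2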

Change-Id: I3a0dd7a93a8a1756205de1d03eee8c00906af0e5
Reviewed-on: https://gerrit-review.googlesource.com/34570
Reviewed-by: Shawn Pearce <sop@google.com>
Tested-by: Shawn Pearce <sop@google.com>
2012-04-13 09:46:00 -07:00
d67872d2f4 Option for 'repo diff' to generate output suitable for 'patch' cmd
The -u option causes 'repo diff' to generate diff output
with file paths relative to the repository root,
so the output can be consumed by the Unix 'patch' command.
The name '-u' was selected for convenience, because
both 'diff' and 'git diff' accept an option with the same name
to generate 'unified diff' output suitable for the 'patch' command.

Change-Id: I79c8356db4ed20ecaccc258b3ba139db76666fe0
Reviewed-on: https://gerrit-review.googlesource.com/34380
Reviewed-by: Shawn Pearce <sop@google.com>
Tested-by: Shawn Pearce <sop@google.com>
2012-04-13 09:20:10 -07:00
e9d6b611c5 New flag for repo upload: --current_branch (--cbr)
A convenient equivalent to `repo upload --br=<current git branch>`.

Note that the head branch will be selected for each project
uploaded by repo, so different branches may be uploaded for
different projects.

Change-Id: I10ad8ceaa63f055105c2d847c6e329fa4226dbaf
2012-04-06 10:43:36 -04:00
c3d2f2b76f Ignore /clone.bundle on HTTP 401, 403 and 404
401: Unauthorized, authentication may be required. This is usually
     handled internally by the HTTP client in Python. If it reaches
     our code in repo, the Python HTTP client didn't find a password
     in ~/.netrc that it could use.

403: Authentication was supplied, but is incorrect. It might be
     that the CDN doesn't want to offer this clone.bundle file
     to the client, but the Git fetch operation would still be
     successful. This might arise if branch level read controls
     were used in Gerrit Code Review and the /clone.bundle file
     contained branches not visible to the client.

404: The server has no /clone.bundle file available.

In all of these cases, silently ignore the /clone.bundle HTTP
error and let the Git operation take over.

Change-Id: I1787f3cac86b017351952bbb793fe5874d83c72b
2012-03-22 14:18:40 -07:00
cd7c5deca0 Do not change branch.foo.merge in case of manifest sync
In the case of manifest/smart sync, repo changes the ".merge" config
option from a branch to a SHA. 'repo upload' then fails, as repo
tries to upload to a remote branch that looks like a SHA
(e.g. refs/for/23423423423423423423423).

Do not update .merge if the revision is a SHA.

Change-Id: I9139708fa17f21eec5a7e23c3255333626bf529e
2012-03-20 14:11:56 -07:00
e02ac0af2e sync: --no-clone-bundle disables the clone bundle support
Change-Id: Ia9ed7da8451b273c1be620c3dd0dcad777b29096
2012-03-14 15:38:28 -07:00
898e12a2d9 Permit - in URL schemes for special URLs
Clients might be using their own special git-remote-* helper that
has a hyphen in its name. Permit - in the scheme part of the URL
when trying to decide whether it is an SSH URL, and assume it is
*not* SSH if the URL matches the "foo-bar://" style.

Change-Id: I7ba2d810a614f6e605a441d5972902c4a14e73fd
2012-03-14 15:28:22 -07:00
ae0a36c9a5 Add support for Apache Digest authentication for repo init.
The repo tool currently supports only Basic authentication. For
those who want to use this tool to manage their own projects, if
the administrator has configured the Apache server with the Digest
authentication method, users will fail to authenticate when they
run 'repo init'. Adding the digest authentication password manager
to the handler list fixes this issue.
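
A minimal sketch of the shape of that fix (not the exact repo code):
register a Digest handler alongside the Basic handler against a
shared password manager, so urllib2 can answer either challenge:

    import urllib2

    passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
    opener = urllib2.build_opener(
        urllib2.HTTPBasicAuthHandler(passman),
        urllib2.HTTPDigestAuthHandler(passman))
    urllib2.install_opener(opener)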

Since the Git HTTP protocol requires the user to be authenticated
for the fetch operation before pushing commits to the remote, it is
unlikely that the administrator would implement anonymous read (aka
pull) access alongside authenticated write (aka push) access; both
read and write have to be authenticated.
Be aware that the user may have to add an extra line in his
~/.netrc file:
-------------------
account example.com
-------------------
where 'example.com' is the realm for Apache Digest authentication.

Change-Id: I76eb27b205554426d9ce1965deaaf727b87916cd
Signed-off-by: Xiaodong Xu <stid.smth@gmail.com>
2012-03-14 15:01:34 -07:00
76abcc1d1e repo status to print project name on clean gits
repo status just prints "# on branch oprofile" if you have a branch
in a clean project. This doesn't really tell which project is meant.

Instead we can use the same syntax as for modified projects, which
gives us detailed information.

Change-Id: I55fe5154d278e10a814281dd2ba501ec6e956730
2012-03-12 12:25:40 -07:00
d315382572 Add 'rebase="false"' attribute to the <project/> XML.
This new attribute can prevent 'repo sync' from automatically rebasing.

I hit a situation where one of the git repositories I was tracking
was actually an external repository that I wanted to pull commits
into and merge myself. (NOT rebase, since that would lose the merge
history.) In this case, I'm not using 'repo upload'; I'm manually
managing the merges to and from this repository.

Everything was going great until I typed 'repo sync' and it rebased
my manually-merged tree. Hence the option to skip it.
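
A minimal sketch (project name illustrative):

    <project name="external/foo" path="external/foo" rebase="false"/>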

Change-Id: I965e0dd1acb87f4a56752ebedc7e2de1c502dbf8
2012-03-12 12:24:22 -07:00
43bda84362 Avoid missing content-length header in project.py
Occasionally, the content-length header may be missing when using
urllib in Python 2.6 and 2.7.  This change assumes the value of the
header is 0 if it doesn't exist.

Change-Id: Iaf1c8a796bc667823d4d7c30f9b617644b271d00
2012-03-12 12:13:15 -07:00
9b017dab46 Update SUBMITTING_PATCHES
The review server is now at gerrit-review.googlesource.com.

Change-Id: I4be67fdb1876eb2e2af4420ac63557596b9e233b
2012-02-28 18:54:33 -08:00
e9dc3b3368 sync: Add manifest_name parameter
This parameter changes the manifest used by 'repo sync' for only
this execution. It should be useful for developers wishing to get
the repo temporarily into a known state, without clobbering their
existing manifest.

Tested by shifting Chrome OS between minilayout and full, and
between several release-builder-generated manifests.

Change-Id: I14194b665195b0e78f368d9ec8b8a83227af2627
2012-01-26 12:32:36 -05:00
c9571423f8 upload: Support uploading to Gerrit over https://
If SSH is not available, Gerrit returns NOT_AVAILABLE to the /ssh_info
query made by repo upload. In this case fallback to the /p/$PROJECT URL
that Gerrit also exports and use that for uploads.

Change-Id: I1e3e39ab709ecc0a692614a41a42446426f39c08
2012-01-11 16:18:40 -08:00
34fb20f67c Revert "Default repo manifest settings in git config"
This reverts commit ee1c2f5717.

This breaks a lot of buildbot systems. Rolling it back for now
until we can understand what the breakage was and how to fix it.
2011-11-30 13:41:02 -08:00
ecff4f17b0 Describe the repo launch version in repo version
  repo version v1.7.8
         (from https://android.googlesource.com/tools/repo.git)
  repo launcher version 1.14
         (from /home/sop/bin/repo)
  git version 1.7.8.rc2.256.gcc761
  Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41)
  [GCC 4.4.3]

Change-Id: Ifcbe5b0e226a1a6ca85455eb62e4da5e9a0f0ca0
2011-11-29 15:02:15 -08:00
cc14fa9820 Improve error handling when reading loose refs
When repo is trying to figure out branches the repository has by
traversing refs/heads, add exception handling for readline.

Change-Id: If3b2a3720c6496f52f629aa9a2539f186d6ec882
2011-11-29 14:43:04 -08:00
3ce2a6b46b Propagate result codes from subcmds to sys.exit().
Allows scripts driving repo to know when git failures have
occurred, not just repo internal errors.

Change-Id: Id20fbbb405c35a148e72c87b822da3f3bf93839c
2011-11-29 14:38:19 -08:00
841be34968 Don't prompt the user for name/email unless necessary
If the user has already configured a workspace, use these values
when re-running 'repo init'.

Otherwise, if the user has global name and e-mail set, use these.

It's always possible to override this and be prompted by specifying
--config-name when running 'repo init'.

Change-Id: If45f0e4b14884071439fb02709dc5cb53f070f60
2011-11-29 14:31:56 -08:00
ee1c2f5717 Default repo manifest settings in git config
A default manifest URL can be specified using:
  git config --global repo-manifest.<id>.url <url>

A default manifest server can be specified using:
  git config --global repo-manifest.<id>.server <url>

A default git mirror reference can be specified using:
  git config --global repo-manifest.<id>.reference <path>

This will allow the user to use 'repo init -u <id>' as
a shorter alternative to specifying the full URL.

Also, the manifest server will not have to be specified in the
manifest XML, and the reference will not have to be specified on
the command line. If they are, however, they will override these
default values.
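
For example, with an id of 'aosp' (id and URL illustrative):

  git config --global repo-manifest.aosp.url https://android.googlesource.com/platform/manifest
  repo init -u aosp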

Change-Id: Ifdbc160bd5909ec7df9efb0c5d7136f1d9351754
Signed-off-by: Victor Boivie <victor.boivie@sonyericsson.com>
2011-11-29 14:24:58 -08:00
6a1f737380 Added remote destination branch information when uploading.
Several times one has done an upload, only to notice later in Gerrit
that the upload went to the wrong branch because the git had not yet
been branched for the current work. This change makes repo print
the destination branch when asking the user whether she wants to
go through with the upload.

Change-Id: Ia9c3a92a6a04c022edfebf4f8d651ac062bb1f3b
2011-11-29 14:01:57 -08:00
e9311273dd repo: capitalize default prompt char
It is common in command line tools to indicate what the default answer
will be if the user simply hits enter.  In repo, the display is just
"y/n" with no indication as to which is the default.  So change the n
to N in the messages since that is how repo operates.

Change-Id: I81819ae630355072eb0365e59168b0921289498f
2011-11-29 12:38:52 -08:00
605a9a487b Fixed UnicodeDecodeError while uploading changes.
When a commit has a comment with non-ASCII characters, a
UnicodeDecodeError is raised while uploading multiple project/branch
changes. Some strings in the script are not of str type but unicode,
so all strings get decoded to unicode; Python uses ASCII to do this
and cannot decode the non-ASCII characters, so a UnicodeDecodeError
is raised.

Signed-off-by: chenguodong <chenguodong@huawei.com>

Change-Id: I46447f489a4b9760a5899c7ba9d764b688594e46
2011-11-29 12:11:41 -08:00
2a32f6afa6 Fix typo
Change-Id: Idd68ad0a34fcf4bd4e18b0248f50187a539d610a
2011-11-29 12:09:35 -08:00
498fe90b45 Stabilize repo communication with subprocesses.
Make repo use the standard Python way of working with pipes:
communication via pipes to subprocesses is done by calling
communicate(). This keeps repo from hanging every now and then.

Change-Id: Ibe2c4ecbdbcbe72f0b725ca50d54088e5646fc5d
2011-11-29 11:54:58 -08:00
53d6f4d17e Add a sync flag that fetches only current branch
There are also shortcuts for the case where the "current branch" is
a persistent revision such as a tag or sha1: we check whether the
persistent revision is present locally, and if it is, we do not
fetch anything from the server.

This greatly reduces sync time and the size of the on-disk repo.
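
For example (assuming the flag is exposed as -c/--current-branch,
matching the sync-c naming used elsewhere in this log):

    repo sync -c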

Change-Id: I23c6d95185474ed6e1a03c836a47f489953b99be
2011-11-03 13:08:27 -07:00
9d8f914fe8 Remove extra '/' in RemoteSpec
urljoin appends a '/' if only the domain is in the URL path.  This
change strips that off before creating a RemoteSpec.
2011-11-03 13:05:14 -07:00
ceea368e88 Correctly name projects when mirroring
A bug introduced by relative URLs caused projects such as manifest.git
to be placed in the root directory instead of the directory they should
be in.

This fix creates and refers to a resolvedFetchUrl in the _XmlRemote
class in order to get a fetchUrl that is never relative.
2011-10-20 11:01:38 -07:00
b660539c4a Fix sync on Python 2.6.6
Python 2.6.6 has the same bug as Python 2.7, where HTTP
authentication just stops working, but does not have the
setter method to clear the retry counter. Work around by
setting the field directly if it exists.

Change-Id: I6a742e606bb7750dc66c33fc7c5d1310541db2c8
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-10-11 15:58:07 -07:00
752371d91b help: Fix help sync
'help sync' crashed because sync required the manifest to be configured
in order to create its option parser, since the default number of jobs
comes from the manifest.

Change-Id: Ie75e8d75ac0e38313e4aab451cbb24430e84def5
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-10-11 15:23:41 -07:00
1a68dc58eb upload: Honor REPO_HOST_PORT_INFO environment variable
REPO_HOST_PORT_INFO can be set to 'host:port' and be used
instead of the review URL given in the manifest.
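
For example (host and port illustrative):

    REPO_HOST_PORT_INFO=review.example.com:29418 repo upload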

Change-Id: I440bdecb2c2249fe5285ec5d0c28a937b4053450
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-10-11 14:12:46 -07:00
df5ee52050 Fix Python 2.4 support
Change-Id: I89521ae52fa564f0d849cc51e71fee65b3c47bab
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-10-11 14:06:11 -07:00
fab96c68e3 Work around Python 2.7 urllib2 bug
If the remote is using authenticated HTTP, but does not have
$GIT_URL/clone.bundle files in each repository, an initial sync
would fail around 8 projects in due to the library not resetting
the number of failures after getting a 404.

Work around this by updating the retry counter ourselves.

The urllib2 library is also not thread-safe. Make it somewhat
safer by wrapping the critical section with a lock.

Change-Id: I886e2750ef4793cbe2150c3b5396eb9f10974f7f
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-10-11 12:18:07 -07:00
bf1fbb20ab Fix AttributeError: 'HTTPError' object has no attribute 'reason'
Not every version of urllib2 supplies a reason object on the
HTTPError exception that it throws from urlopen().  Work around
this by using str(e) instead and hope the string formatting includes
sufficient information.

Change-Id: I0f4586dba0aa7152691b2371627c951f91fdfc8d
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-10-11 09:31:58 -07:00
29472463ba Work around Python 2.7 failure to initialize base class
urllib2 returns a malformed HTTPError object in certain situations.
For example, urllib2 has a couple of places where it creates an
HTTPError object with no fp:

  if self.retried > 5:
    # retry sending the username:password 5 times before failing.
    raise HTTPError(req.get_full_url(), 401, "basic auth failed",
                    headers, None)

When it does that, HTTPError's ctor doesn't call through to
addinfourl's ctor:

  # The addinfourl classes depend on fp being a valid file
  # object.  In some cases, the HTTPError may not have a valid
  # file object.  If this happens, the simplest workaround is to
  # not initialize the base classes.
  if fp is not None:
    self.__super_init(fp, hdrs, url, code)

Which means the 'headers' slot in addinfourl is not initialized and
info() fails.  It is completely insane that urllib2 decides not to
initialize its own base class sometimes.

Change-Id: I32a0d738f71bdd7d38d86078b71d9001e26f1ec3
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-10-11 09:24:07 -07:00
c325dc35f6 sync: Fetch after applying bundle and retry after errors
After a $GIT_URL/clone.bundle has been applied to the new local
repository, perform an incremental fetch using `git fetch` to ensure
the local repository is up-to-date. This allows the hosting server
to offer stale /clone.bundle files to bootstrap a new client.

If a single git fetch fails, it may succeed again after a short
delay.  Transient failures are typical in environments where the
remote Git server happens to have limits on how many requests it
can serve at once (the anonymous git daemon, or an HTTP server).
Wait a randomized delay between 30 and 45 seconds and retry the
failed project once.  This delay gives the site time to recover
from a transient traffic spike, and the randomization makes it less
likely that a spike occurs again from all of the same clients.
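
A sketch of that delay (illustrative, not the exact repo code):

    import random, time

    time.sleep(random.uniform(30, 45))  # randomized 30-45s back-off
    # ...then retry the failed project once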

Change-Id: I97fb0fcb33630fb78ac1a21d1a4a3e2268ab60c0
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-10-03 08:30:24 -07:00
f322b9abb4 sync: Support downloading bundle to initialize repository
An HTTP (or HTTPS) based remote server may now offer a 'clone.bundle'
file in each repository's Git directory. Over an http:// or https://
remote, repo will first ask for '$URL/clone.bundle' and, if present,
download it to bootstrap the local client, rather than relying
on the native Git transport to initialize the new repository.

Bundles may be hosted elsewhere. The client automatically follows an
HTTP 302 redirect to acquire the bundle file. This allows servers
to direct clients to cached copies residing on content delivery
networks, where the bundle may be closer to the end-user.

Bundle downloads are resumable from where they last left off,
allowing clients to initialize large repositories even when the
connection gets interrupted.

If a bundle does not exist for a repository (an HTTP 404 response
code is returned for '$URL/clone.bundle'), the native Git transport
is used instead. If the client is performing a shallow sync, the
bundle transport is not used, as there is no way to embed shallow
data into the bundle.

Change-Id: I05dad17792fd6fd20635a0f71589566e557cc743
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-09-28 10:07:36 -07:00
db728cd866 Allow remote url to be relative to manifest url 2011-09-28 10:07:01 -07:00
c4657969eb sync: Update default -j flag from manifest
If the manifest is updated and the default sync-j attribute
was modified, honor it during this sync session if the user
has not supplied a -j flag on the command line.

Change-Id: I127ee5c779e2bbbb40b30bddc10ec1fa704b3bf3
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-09-26 09:08:44 -07:00
7b947de1ee Ignore missing ~/.netrc
Change-Id: Ifa6065d57a6cb11ad57ddd44bc88d9690fe234ab
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-09-23 11:50:31 -07:00
6392c87945 sync: Allow -j to have a default in manifest
This permits manifest authors to suggest a number of parallel
fetch operations against a remote server. For example, Gerrit
Code Review servers support queuing of requests and processes
them in first-in, first-out order. Running concurrent fetches
can utilize multiple CPUs on the Gerrit server, but will also
decrease overall operation latency by having the request put
into the queue ready to execute as soon as a CPU is free.

Change-Id: I3d3904acb6f63516bae4b071c510ad57a2afab18
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-09-22 18:08:27 -07:00
97d2b2f7a0 sync: Limit -j to file descriptors
Each worker thread requires at least 3 file descriptors to run the
forked 'git fetch' child to operate against the local repository.
Mac OS X has the RLIMIT_NOFILE set to 256 by default, which means
a sync -j128 often fails when the workers run out of pipes within
the Python parent process.

Change-Id: I2cdb14621b899424b079daf7969bc8c16b85b903
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-09-22 18:08:26 -07:00
3a0e782790 Add global option --time to track execution
This prints a simple line after a command ends, reporting how long
it executed in real wall-clock time. It's mostly useful for looking
at sync times.

Change-Id: Ie0997df0a0f90150270835d94b58a01a10bc3956
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-09-22 18:08:18 -07:00
490d09a314 Support units in progress messages
This allows our progress meter to be used for bytes transferred, by
setting the units to KB or MB to let the user know the size.

Change-Id: Ie8653d4a40d79439026c18bd51915845b2c5bba9
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-09-19 14:52:57 -07:00
13111b4e97 Add support for url.*.insteadof
Teach repo how to resolve URLs using the url.insteadof feature
that C Git natively uses during clone, fetch or push. This will
later allow repo to resolve a URL before accessing it directly.
We do not want to pre-resolve things and store the resolved URL
into individual projects, as this makes it impossible for the
user to undo the insteadof mapping at a later date.
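
For reference, the native Git configuration this resolves looks like
(URLs illustrative):

    git config --global url."https://git.example.com/".insteadOf git://git.example.com/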

Change-Id: I0f62e811197c53fbc8a8be424e3cabf4ed07b4cb
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-09-19 14:52:57 -07:00
bd0312a484 Support ~/.netrc for HTTP Basic authentication
If repo tries to access a URL over HTTP and the user needs to
authenticate, offer a match from ~/.netrc. This matches behavior
with the Git command line client.
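
A minimal ~/.netrc entry has the shape (host and credentials
illustrative):

    machine git.example.com
    login alice
    password s3cret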

Change-Id: I803f3c5d562177ea0330941350cff3cc1e1bef08
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-09-19 14:52:32 -07:00
334851e4b6 Enhance HTTP support
Setting REPO_CURL_VERBOSE=1 in the environment will register a debug
level HTTPHandler on the urllib2 library, showing HTTP requests and
responses on the stderr channel of repo.

During any HTTP or HTTPS request created inside of the repo process,
a custom User-Agent header is now defined:

  User-Agent: git-repo/1.7.5 (Linux) git/1.7.7 Python/2.6.5

Change-Id: Ia5026fb1e1500659bd2af27416d85e205048bf26
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-09-19 14:51:47 -07:00
014d060989 Honor http_proxy variable globally
If the http_proxy environment variable was set, honor it during
the entire repo session for any Python created HTTP connections.

Change-Id: Ib4ae833cb2cdd47ab0126949f6b399d2c142887d
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-09-11 13:11:04 -07:00
44da16e8a0 Change default REPO_URL to code.google.com
Change-Id: If7700daf96fb8f3ee449e5774017272ef31b4b44
2011-09-05 14:16:49 -07:00
65e0f35fda Add commit-msg hook also for manifest project
The manifest project has, by design, no review URL associated
with it. It is actually not even a 'project' in repo's sense.

This prevents the commit-msg hook from being added, which is not
necessarily wanted when the project is managed in Gerrit.

This commit will enable the commit-msg hook, which in turn will
add the Change-Id-line to every new commit in it. This simplifies
replacing patch sets (by git push ... refs/for/...).

Change-Id: I42d0f6fd79e6282d9d47074a3819e68d968999a7
Signed-off-by: Victor Boivie <victor.boivie@sonyericsson.com>
2011-07-20 07:34:23 -07:00
08c880db18 Smart tag support
This is an evolution of 'smart-sync' that adds a new option, -t,
that allows you to specify a tag/label to use instead of the
"latest good build" on the current manifest branch which -s does.

Signed-off-by: Victor Boivie <victor.boivie@sonyericsson.com>
Change-Id: I8c20fd91104a6aafa0271d4d33f6c4850aade17e
2011-07-20 07:13:48 -07:00
a101f1c167 Honor 'http_proxy' environment variable
'repo upload' makes HTTP requests using the urllib2 Python library.
Unfortunately this library does not work (by default) if the user
is behind a proxy.

This change adds a proxy handler when the 'http_proxy' environment
variable is set.
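
A sketch of the shape of that handler (not the exact repo code):

    import os, urllib2

    if 'http_proxy' in os.environ:
      proxy = urllib2.ProxyHandler({'http': os.environ['http_proxy']})
      urllib2.install_opener(urllib2.build_opener(proxy))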

Change-Id: Ic4176ad733fc21bd5b59661b3eacc2f0a7c3c1ff
2011-07-20 07:11:00 -07:00
49cd59bc86 Add --depth option to main repo wrapper.
See related repo change:
  https://review.source.android.com/#change,22722

Change-Id: I9bdd86971c94604477b91cdf47d6fac2c0bc186e
2011-06-14 16:59:19 -07:00
30d452905f Add a --depth option to repo init.
Change-Id: Id30fb4a85f4f8a1847420b0b51a86060041eb5bf
2011-06-09 16:48:23 -07:00
d6c93a28ca Add branch support to repo upload
This commit adds a --br=<branch> option to repo upload.

repo currently examines every non-published branch. This is problematic
for my workflow. I have many branches in my kernel tree. Many of these
branches are based off of upstream remotes (I have many remotes) and
will never be uploaded (they'll get sent upstream as a patch).

Having repo scan these branches adds to my upload processing time
and clutters the branch selection buffer. I've also seen repo get
confused when one of my branches is 1000s of commits different from
m/master.

Change-Id: I68fa18951ea59ba373277b57ffcaf8cddd7e7a40
2011-05-26 10:49:39 -07:00
d572a13021 Added repo cherry-pick command
It is undesired to have the same Change-Id:-line for two separate
commits, and when cherry-picking, the user must manually change it.

If this is not done, bad things may happen (such as when the user
is uploading the cherry-picked commit to Gerrit, it will instead
see it as a new patch-set for the original change, or worse).

repo cherry-pick works the same way as git cherry-pick, except that
it replaces the Change-Id with a new one and adds a reference
back to the commit from where it was picked.

On failures (when git cannot successfully apply the cherry-picked
commit), instructions will be written to the user.

Change-Id: I5a38b89839f91848fad43386d43cae2f6cdabf83
2011-04-07 17:19:06 -04:00
3ba5f95b46 Fixed repo checkout error message when git reports errors.
In the current version of repo checkout, we often get the error:
  error: no project has branch xyzzy

...even when the actual error was something else.  This fixes it
to only report the 'no project has branch' when that is actually true.

This fix is very similar to one made for 'repo abandon':
  https://review.source.android.com/#change,22207

The repo checkout error is filed as: <http://crosbug.com/6514>

TEST=manual

A sample creating a case where 'git checkout' will fail:

  $ repo start branch1 .
  $ repo start branch2 .
  $ touch bogusfile
  $ git add bogusfile
  $ git commit -m "create bogus file"
  [branch2 f8b6b08] create bogus file
   0 files changed, 0 insertions(+), 0 deletions(-)
   create mode 100644 bogusfile
  $ echo "More" >> bogusfile
  $ repo checkout branch1 .
  error: chromite/: cannot checkout branch1

A sample case showing that we still fail if no project has a branch:

  $ repo checkout xyzzy .
  error: no project has branch xyzzy

Change-Id: I48a8e258fa7a9c1f2800dafc683787204bbfcc63
2011-04-07 16:55:35 -04:00
2630dd9787 Fixed problems w/ 2nd repo init if first repo init had bad URL.
This is the simplest fix: if we had problems syncing the
manifest.git directory and we were the ones that created it,
we should delete it.  This doesn't try to do anything complex
like try to recover from a .repo directory that got broken in
some other way.

This is filed as: <http://crosbug.com/13403>

TEST=manual

Init once with a bad URL:
  $ repo init -u http://foobar.example.com
  Getting manifest ...
     from http://foobar.example.com
  Connection closed by 172.22.121.77
  error: Couldn't resolve host 'foobar.example.com' while accessing http://foobar.example.com/info/refs

  fatal: HTTP request failed
  fatal: cannot obtain manifest http://foobar.example.com

Init again: identical to the first.  Good:
  $ repo init -u http://foobar.example.com
  Getting manifest ...
     from http://foobar.example.com
  Connection closed by 172.22.121.77
  error: Couldn't resolve host 'foobar.example.com' while accessing http://foobar.example.com/info/refs

  fatal: HTTP request failed
  fatal: cannot obtain manifest http://foobar.example.com

Init with correct URL:
  $ repo init -u http://git.chromium.org/git/manifest -m minilayout.xml
  Getting manifest ...
     from http://git.chromium.org/git/manifest
  [ ... cut ... ]

  repo initialized in /.../repoiniterr

Try a bad URL after a good one; it doesn't get saved (good):
  $ repo init -u http://foobar.example.com
  Connection closed by 172.22.121.77
  error: Couldn't resolve host 'foobar.example.com' while accessing http://foobar.example.com/info/refs

  fatal: HTTP request failed
  fatal: cannot obtain manifest http://foobar.example.com

Just to confirm, I can still do a good one after a bad...
  $ repo init -u http://git.chromium.org/git/manifest -m minilayout.xml

  Your Name  [George Washington]:
  Your Email [george@washington.example.com]:

  Your identity is: George Washington <george@washington.example.com>
  is this correct [y/n]? y

  repo initialized in /.../repoiniterr

Change-Id: I1692821a330d97b1d218b2e191a93245b33f2362
2011-04-07 16:51:50 -04:00
dafb1d68d3 Fixed repo abandon to give better messages.
The main fix is to give an error message if nothing was actually
abandoned.  See <http://crosbug.com/6041>.

The secondary fix is to list projects where the abandon happened.
This could be done in a separate CL or dropped altogether if requested.

TEST=manual

$ repo abandon dougabc; echo $?
Abandon dougabc: 100% (127/127), done.
Abandoned in 2 project(s):
  chromite
  src/platform/init
0

$ repo abandon dougabc; echo $?
Abandon dougabc: 100% (127/127), done.
error: no project has branch dougabc
1

$ repo abandon dougabc; echo $?
Abandon dougabc: 100% (127/127), done.
error: chromite/: cannot abandon dougabc
1

Change-Id: I79520cc3279291acadc1a24ca34a761e9de04ed4
2011-04-07 16:49:23 -04:00
4655e81a75 Add option to check status of projects in parallel.
Change-Id: I6ac653f88573def8bb3d96031d3570ff966251ad
2011-04-07 16:36:42 -04:00
723c5dc3d6 Fix parallel sync on python < 2.6.
Event.isSet was renamed to is_set in 2.6, but we should
use the earlier syntax to avoid breaking compatibility
with older Python installations.

Change-Id: I41888ed38df278191d7496c1a6eed15e881733f4
2011-04-04 11:34:47 -04:00
e6a0eeb80d sync: Fix syntax error on Python 2.4
Change-Id: I371d032d5a1ddde137721cbe2b24bfa38f20aaaa
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-03-22 19:04:47 -07:00
0960b5b53d Creating rr-cache
If git-rerere is enabled, it uses the rr-cache directory. repo
currently creates a symlink for it, but doesn't create the
destination directory (inside the project's directory), so Git
will complain during merges and rebases.

This commit creates the rr-cache directory inside the project.

Change-Id: If8b57a04f022fc6ed6a7007d05aa2e876e6611ee
2011-03-17 09:19:51 -07:00
fc06ced9f9 Make 'repo sync -jN' exit with an error code in the case of sync errors.
The bug that this is fixing is described here:

http://code.google.com/p/chromium-os/issues/detail?id=6813

This fix allows the helper threads to signal the main thread that they
saw an error.  When the main thread sees the error, it will let all
existing threads finish, then exit with an error.
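
A minimal sketch of the signaling mechanism (illustrative names,
not the actual repo code):

    import sys, threading

    err_event = threading.Event()

    def _worker(project):
      try:
        _SyncOne(project)    # hypothetical per-project fetch
      except Exception:
        err_event.set()      # signal the main thread

    # main thread, after every worker has been joined:
    if err_event.isSet():
      sys.exit(1)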

Change-Id: If3019bc6b0b3ab9304d49ed2eea53e9d57f3095a
2011-03-17 09:17:42 -07:00
fce89f218a Add 'list' command to repo.
This isn't a required command, but might be more discoverable for
repo newbies?

Change-Id: If357346f234774d42e04e024e65acdaf6dca6c62
2011-03-16 12:55:44 -07:00
37282b4b9c Support repo-level pre-upload hook and prep for future hooks.
All repo-level hooks are expected to live in a single project at the
top level of that project.  The name of the hooks project is provided
in the manifest.xml.  The manifest also lists which hooks are enabled
to make it obvious if a file somehow failed to sync down (or got
deleted).

Before running any hook, we will prompt the user to make sure that it
is OK.  A user can deny running the hook, allow once, or allow
"forever" (until hooks change).  This tries to keep with the git
spirit of not automatically running anything on the user's computer
that got synced down.  Note that individual repo commands can add
always options to avoid these prompts as they see fit (see below for
the 'upload' options).

When hooks are run, they are loaded into the current interpreter (the
one running repo) and their main() function is run.  This mechanism is
used (instead of using subprocess) to make it easier to expand to a
richer hook interface in the future.  During loading, the
interpreter's sys.path is updated to contain the directory containing
the hooks so that hooks can be split into multiple files.

The upload command has two options that control hook behavior:
  - no-verify=False, verify=False (DEFAULT):
    If stdout is a tty, can prompt about running upload hooks if needed.
    If user denies running hooks, the upload is cancelled.  If stdout is
    not a tty and we would need to prompt about upload hooks, upload is
    cancelled.
  - no-verify=False, verify=True:
    Always run upload hooks with no prompt.
  - no-verify=True, verify=False:
    Never run upload hooks, but upload anyway (AKA bypass hooks).
  - no-verify=True, verify=True:
    Invalid

Sample bit of manifest.xml code for enabling hooks (assumes you have a
project named 'hooks' where hooks are stored):
  <repo-hooks in-project="hooks" enabled-list="pre-upload" />

Sample main() function in pre-upload.py in hooks directory:
  import random

  def main(project_list, **kwargs):
    print ('These projects will be uploaded: %s' %
           ', '.join(project_list))
    print ('I am being a good boy and ignoring anything in kwargs\n'
           'that I don\'t understand.')
    print 'I fail 50% of the time.  How flaky.'
    if random.random() <= .5:
      raise Exception('Pre-upload hook failed.  Have a nice day.')

Change-Id: I5cefa2cd5865c72589263cf8e2f152a43c122f70
2011-03-11 11:53:23 -08:00
835cd6888f Post-nonexistent-revision crash sidestepped
Fix for the bug that leaves a partial .git directory after attempting
to perform an initial sync to a nonexistent revision. Moved the
initialization of the working directory to after the revision ID has
been checked. Now no project/.git directory gets created at all if the
revision ID is bad.

Change-Id: I0c9b2a59573410f1d11de7661591bf02e4ce326b
2011-03-08 13:48:24 -08:00
8ced8641c8 Renamed 'repo_hooks' function to '_ProjectHooks'.
This renaming was done for two reasons:
1. The hooks are actually project-level hooks, not repo-level
   hooks.  Since we are talking about adding repo-level hooks,
   it keeps things less confusing if we name the existing hooks
   "ProjectHooks".
2. The function is a private function in project.py and so
   should have capitalization to match.

I also added a docstring describing this function.

Change-Id: I1d30f5de08e8f9f99f78146e68c76f906782d97e
2011-02-01 09:57:29 -08:00
2536f80625 Fixed bug identifying 'commit-msg' files.
There was a minor typo that would cause repo to (I believe)
mistakenly identify any file that contained a substring of the
word 'commit-msg' as a commit message hook.  For example, the file
'mit' or the file 'msg' would be treated as a commit message hook.
I believe that it was intended that repo only recognize files
named exactly 'commit-msg'.

Change-Id: I93edbddf3da3cf0935641e6efb19b0a8ee6e2308
2011-02-01 09:53:56 -08:00
0ce6ca9c7b Fix mirror clients with no worktree
Commit "Make path references OS independent" (df14a70c45)
broke mirror clients by trying to invoke replace() on None
when there is no worktree.

Change-Id: Ie0a187058358f7dcdf83119e45cc65409c980f11
2011-01-10 13:26:34 -08:00
0fc3a39829 Bump repo version to 1,10
Change-Id: Ifdc041e7152af31de413b9269f20000acd945b3b
2011-01-10 09:01:24 -08:00
c7c57e34db help: Don't show empty Summary or Description sections
Signed-off-by: Shawn O. Pearce <sop@google.com>
(cherry picked from commit 60e679209a)
2011-01-09 17:39:22 -08:00
0d2b61f11d sync: Run git gc --auto after fetch
Users may wind up with a lot of loose object content in projects they
don't frequently make changes in, but that are modified by others.

Since we bypass many git code paths that would have otherwise called
out to `git gc --auto`, it's possible for these projects to have
their loose object database grow out of control.  To help prevent
that, we now invoke it ourselves during the network half of sync.

Signed-off-by: Shawn O. Pearce <sop@google.com>
(cherry picked from commit 1875ddd47c)
2011-01-09 17:39:22 -08:00
2bf9db0d3b Add "repo branch" as an alias for "repo branches"
For those of us that are used to typing "git branch".

Signed-off-by: Mike Lockwood <lockwood@android.com>
(cherry picked from commit 33f0e786bb)
2011-01-09 17:39:22 -08:00
f00e0ce556 upload: Catch and cleanly report connectivity errors
Instead of giving a Python backtrace when there is a connectivity
problem during repo upload, report that we cannot access the host,
and why, with a halfway decent error message.

Bug: REPO-45
Change-Id: I9a45b387e86e48073a2d99bd6d594c1a7d6d99d4
Signed-off-by: Shawn O. Pearce <sop@google.com>
(cherry picked from commit d2dfac81ad)
2011-01-09 17:39:22 -08:00
1b5a4a0c5d forall: Silently skip missing projects
If a project is missing locally, it might be OK to skip over it
and continue running the same command in other projects.

Bug: REPO-43
Change-Id: I64f97eb315f379ab2c51fc53d24ed340b3d09250
Signed-off-by: Shawn O. Pearce <sop@google.com>
(cherry picked from commit d4cd69bdef)
2011-01-09 17:39:22 -08:00
de8b2c4276 Fix to display the usage message of the download command when the user
doesn't provide any arguments to 'repo download'.

Signed-off-by: Thiago Farina <thiago.farina@gmail.com>
(cherry picked from commit 840ed0fab7)
2011-01-09 17:39:22 -08:00
727ee98a40 Use os.environ.copy() instead of dict()
Signed-off-by: Shawn O. Pearce <sop@google.com>
(cherry picked from commit 3218c13205)
2011-01-09 17:39:22 -08:00
df14a70c45 Make path references OS independent
Change-Id: I5573995adfd52fd54bddc62d1d1ea78fb1328130
(cherry picked from commit b0f9a02394)

Conflicts:

	command.py
2011-01-09 17:39:19 -08:00
f18cb76173 Encode the environment variables passed to git
Windows allows the environment to have unicode values.
This will cause Python to fail to execute the command.

Change-Id: I37d922c3d7ced0d5b4883f0220346ac42defc5e9
Signed-off-by: Shawn O. Pearce <sop@google.com>
2011-01-09 16:13:56 -08:00
d3fd537ea5 Exit with statuscode 0 for repo help init
The complete help text is printed, so the program executed successfully.

Some tools (like OpenGrok) detect the availability of a program by
running it with a known set of options and checking the return code.
It is an easy and portable way of checking for the existence of a
program, instead of searching the path (and handling extensions)
ourselves.

Change-Id: Ic13428c77be4a36d599ccb8c86d893308818eae3
2011-01-09 16:10:04 -08:00
0048b69c03 Fixed race condition in 'repo sync -jN' that would open multiple masters.
This fixes the SSH Control Masters to be managed in a thread-safe
fashion.  This is important because "repo sync -jN" uses threads to
sync more than one repository at the same time.  The problem didn't
show up earlier because it was masked if all of the threads tried to
connect to the same host that was used on the "repo init" line.
2010-12-21 13:39:23 -08:00
2b8db3ce3e Added feature to print a <notice> from manifest at the end of a sync.
This feature is used to convey information, such as when a branch
has ceased development or is an experimental branch with a few
gotchas, etc.

You add it to your manifest XML by doing something like this:
<manifest>
  <notice>
    NOTE TO DEVELOPERS:
      If you checkin code, you have to pinky-swear that it contains no bugs.
      Anyone who breaks their promise will have tomatoes thrown at them in the
      team meeting.  Be sure to bring an extra set of clothes.
  </notice>

  <remote ... />
  ...
</manifest>

Carriage returns and indentation are relevant for the text in this tag.

This feature was requested by Anush Elangovan on the ChromiumOS team.
2010-11-01 15:08:06 -07:00
5df6de075e sync: Use --force-broken to continue other projects
This adds a new flag -f/--force-broken that will allow the rest of
the sync process to continue instead of bailing when a particular
project fails to sync.

Change-Id: I23680f2ee7927410f7ed930b1d469424c9aa246e
Signed-off-by: Andrei Warkentin <andreiw@motorola.com>
Signed-off-by: Shawn O. Pearce <sop@google.com>
2010-10-29 12:20:01 -07:00
a0de6e8eab upload: Remove --replace option
It hasn't been necessary for a long time, and its
functionality can be accomplished with 'git push'.

Change-Id: Ic00d3adbe4cee7be3955117489c69d6e90106559
2010-10-29 12:12:56 -07:00
16614f86b3 sync --quiet: be more quiet
Change-Id: I5e8363c7b32e4546d1236cfc5a32e01c3e5ea8e6
Signed-off-by: Shawn O. Pearce <sop@google.com>
2010-10-29 12:08:57 -07:00
88443387b1 sync: Enable use of git clone --reference
Use git clone to initialize a new repository, and when possible
allow callers to use --reference to reuse an existing checkout as
the initial object storage area for the new checkout.

Change-Id: Ie27f760247f311ce484c6d3e85a90d94da2febfc
Signed-off-by: Shawn O. Pearce <sop@google.com>
2010-10-29 12:08:50 -07:00
99482ae58a Only delete corrupt pickle config files if they exist
os.remove() raises OSError if the file being removed doesn't exist.
Check before calling to ensure we don't raise a useless exception
on an already deleted file.

Change-Id: I44c1c7dd97a47fcab8afb6c18fdf179158b6dab7
Signed-off-by: Shawn O. Pearce <sop@google.com>
2010-10-29 08:25:04 -07:00
ec1df9b7f6 Don't allow git fetch to start ControlMaster
To avoid connectivity problems, we don't want the ssh process
that is started by git fetch to become a ControlMaster for the
overall sync task.  If it did, we would lose connectivity when
git fetch was finished with the current project, causing later
projects to not fetch efficiently.

Change-Id: I8d0dcf9b361276ff8c8b5a6324cbd4a501e9c4dd
Signed-off-by: Shawn O. Pearce <sop@google.com>
2010-10-29 08:15:14 -07:00
06d029c1c8 Check for existing SSH ControlMaster
Be more thorough about checking for an existing ssh master by
running a test command first, and only opening up a new master
if the test fails to connect.

Change-Id: I56fe8e7b4dbc123675b7f259e81d359ed0cd55cf
Signed-off-by: Shawn O. Pearce <sop@google.com>
2010-10-29 08:14:56 -07:00
b715b14807 Fix for handling values of EDITOR which contain a space.
The shell swallows the 0th arg, which was the filename. The simple
fix is to pass in an extra arg for the shell to swallow.
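
A sketch of the pattern ('editor' and 'filename' are assumed
variables; the '_' placeholder becomes the shell's $0 and is
swallowed, so "$@" still expands to the filename):

    import subprocess

    cmd = ['sh', '-c', editor + ' "$@"', '_', filename]
    subprocess.Popen(cmd).wait()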

Change-Id: Iad6304ba9ccea6e7262ee06ef87d3dac57dbde81
2010-08-06 17:05:04 -07:00
60829ba72f upload: Fix --replace flag
--replace started to fail due to a Python error; I forgot to pass
the opt structure through to the replace function.

Change-Id: Ifcd7a0c715c3fd9070a4c58208612a626382de35
Signed-off-by: Shawn O. Pearce <sop@google.com>
2010-07-16 07:42:45 -07:00
a22f99ae41 rebase: Pass through more options
Passing through --whitespace=fix to rebase can be useful
to clean up a branch prior to uploading it for review.

Change-Id: Id85f1912e5e11ff9602e3b342c2fd7441abe67d7
Signed-off-by: Shawn O. Pearce <sop@google.com>
2010-07-15 17:43:02 -07:00
3575b8f8bd upload: Allow review.HOST.username to override email
Some users might need to use a different login name than the local
part of their email address for their Gerrit Code Review user
account.  Allow it to be overridden with the review.HOST.username
configuration variable.

Change-Id: I714469142ac7feadf09fee9c26680c0e09076b75
Signed-off-by: Shawn O. Pearce <sop@google.com>
2010-07-15 17:03:19 -07:00
a5ece0e050 upload -t: Automatically include local branch name
If the -t flag is given to upload, the local branch name is
automatically sent to Gerrit Code Review as the topic branch name
for the change(s).  This requires the server to be Gerrit Code
Review v2.1.3-53-gd50c94e or later, which isn't widely deployed
right now, so the default is opt-out.

Change-Id: I034fcacb405b7cb909147152db427fe69dd7bcbf
Signed-off-by: Shawn O. Pearce <sop@google.com>
2010-07-15 16:52:42 -07:00
cc50bac8c7 Warn users before uploading if there are local changes
Change-Id: I231d7b6a3211e9f5ec71a542a0109b0c195d5e40
Signed-off-by: Shawn O. Pearce <sop@google.com>
2010-07-15 16:43:58 -07:00
0cb1b3f687 sync: Try fetching a tag as a last resort before giving up
If a tagged commit is not reachable by the fetch refspec configured
for the git (usually refs/heads/*) it will not be downloaded by
'git fetch'.  The tag can however be downloaded with 'git fetch
--tags' or 'git fetch tag <tag>'.

This patch fixes the situation when a tag is not found after a
'git fetch'. Repo will issue 'git fetch tag <tag>' before giving
up completely.

Change-Id: I87796a5e1d51fcf398f346a274b7a069df37599a
Signed-off-by: Shawn O. Pearce <sop@google.com>
2010-07-15 16:38:08 -07:00
9e426aa432 rebase: Automatically rebase branch on upstream
Usage: repo rebase [[-i] <project>...]

Rebases the current topic branch of the specified (or all)
projects against the appropriate upstream.

Note: Interactive rebase is currently only supported when
exactly one project is specified on the command line.

Change-Id: I7376e35f27a6585149def82938c1ca99f36db2c4
Signed-off-by: Shawn O. Pearce <sop@google.com>
2010-07-15 16:35:31 -07:00
08a3f68d38 upload: Automatically --cc folks in review.URL.autocopy
The upload command will read review.URL.autocopy from the project's
configuration and append the list of e-mails specified to the
--cc argument of the upload command if a non-empty --re argument
was provided.

Change-Id: I2424517d17dd3444b20f0e6a003be6e70b8904f6
Signed-off-by: Shawn O. Pearce <sop@google.com>
2010-07-15 16:30:32 -07:00
feb39d61ef Fix format string bugs in grep
This fixes some format string bugs in grep which cause repo to fail
with "TypeError: not enough arguments for format string" when grepping
and the output contains a valid Python format string.

Change-Id: Ice8968ea106148d409490e4f71a2833b0cc80816
2010-06-17 19:09:37 -07:00
7198572dd7 Do not invoke ssh with -p argument when no port has been specified.
This change allows local SSH configuration to choose the port number
to use when not explicitly set in the manifest.

(cherry picked from commit 4c0f670465)

Change-Id: Ibea99cfe46b6a2cc27f754cc3944a2fe10f6fda4
2010-06-08 11:08:11 -07:00
2daf66740b Allow files to be copied into new folders
Change-Id: I7f169e32be5a4328bb87ce7c2ff4b6529e925126
2010-05-27 18:05:26 -07:00
f4f04d9fa8 Do not emit progress if stderr is not a tty
Avoids logging progress data into cron logs, etc.

Suggested-by: Michael Richardson <mcr@sandelman.ottawa.on.ca>
Change-Id: I4eefa2c282f0ca0a95a0185612b52e2146669e4c
Signed-off-by: Shawn O. Pearce <sop@google.com>
2010-05-27 16:48:36 -07:00
18afd7f679 sync: support --jobs to fetch projects simultaneously
This patch does two things to remain compatible with Python builds
that lack threading support:

1. As the Python documentation and Shawn suggested, import
   dummy_threading when threading is not available.

2. Keep the single-threaded code and make it the default.
   In case --jobs does not work properly with dummy_threading,
   we still have a safe fallback.

Change-Id: I40909ef8e9b5c22f315c0a1da9be38eed8b0a2dc
2010-05-27 14:54:20 -07:00
6623b21e10 Aliasing sync -s to 'smartsync'
This alias will let people use this command without having to
remember the option.

Change-Id: I3256d9e8e884c5be9e77f70e9cfb73e0f0c544c6
2010-05-17 09:58:55 -07:00
ca8c32cd7a sync: kill git fetch process before SSH control master process
If the SSH control master process is killed while an active git
fetch is using its network socket, the underlying SSH client may
not realize the connection was broken.  This can lead to both the
client and the server waiting indefinitely for network messages
which will never be sent.

Work around the problem by keeping track of any processes that use
the tunnels we establish.  If we are about to kill any of the SSH
control masters that we started, ensure the clients using them are
successfully killed first.

Change-Id: Ida6c124dcb0c6a26bf7dd69cba2fbdc2ecd5b2fc
Signed-off-by: Shawn O. Pearce <sop@google.com>
2010-05-11 18:31:47 -07:00
f0a9a1a30e upload: Move confirmation threshold from 3 to 5 commits
Change-Id: I7275d195cf04f02694206b9f838540b0228ff5e1
2010-05-05 09:20:51 -07:00
879a9a5cf0 upload: Confirm unusually large number of uploaded commits
Add a sentinel check to require a second explicit confirmation if the
user is attempting to upload (or upload --replace) an unusually large
number of commits.  This may help the user to catch an accidentally
incorrect rebase they had done previously.

Change-Id: I12c4d102f90a631d6ad193486a70ffd520ef6ae0
2010-05-04 17:15:37 -07:00
ff6929dde8 branches: Enable output of multiple projects
Fixes a bug introduced by 498a0e8a79
("Make 'repo branches -a' the default behavior").

Change-Id: Ib739f82f4647890c46d7c9fb2f2e63a16a0481de
2010-05-04 07:51:28 -07:00
1c85f4e43b Rename _ssh_sock() to fix code style issue.
Since _ssh_sock is imported out of the git_command module, the leading
underscore should be removed from the function name.
2010-04-27 14:35:27 -07:00
719965af35 Override manifest file only after it is fully written to disk.
We called "Override()" before closing the file passed in argument.

Change-Id: I15adb99deb14297ef72fcb1b0945eb246f172fb0
2010-04-26 11:20:22 -07:00
5732e47ebb Strip refs/heads in the branch sent to the manifest server.
The manifest server doesn't want to have refs/heads passed to it, so
we need to strip that when the branch contains it.

Change-Id: I044f8a9629220e886fd5e02e3c1ac4b4bb6020ba
2010-04-26 11:19:07 -07:00
f3fdf823cf sync: Safely skip already deleted projects
Do not error if a project is missing on the filesystem, is deleted
from manifest.xml, but still exists in project.list.

Change-Id: I1d13e435473c83091e27e4df571504ef493282dd
2010-04-14 14:21:50 -07:00
a1bfd2cd72 Add a 'smart sync' option to repo sync
This option allows the user to specify a manifest server to use when
syncing. This manifest server will provide a manifest pegging each
project to a known green build. This allows developers to work on a
known-good tree that builds and passes tests, preventing
failed builds from hampering productivity.

The manifest used is not "sticky" so as to allow subsequent
'repo sync' calls to sync to the tip of the tree.

Change-Id: Id0a24ece20f5a88034ad364b416a1dd2e394226d
2010-04-13 10:20:37 -07:00
6d7508b3d5 Allow 'y' as a valid response when confirming identity
I prefer having to type only one character rather than all three,
and it seems like other confirmation prompts use the same style.
2010-04-01 11:30:56 -07:00
9452e4ec09 Automatically install Gerrit Code Review's commit-msg hook
Most users of repo are also using Gerrit Code Review, and will want
the commit-msg hook to be automatically installed into their local
projects so that Change-Ids are assigned when commits are created,
not when they are first uploaded.

(cherry picked from commit a949fa5d20
 but squashed with latest hook script from version 2.1.2)

Change-Id: Ie68b2d60ac85d8c2285d2e1e6a4536eb76695547
Signed-off-by: Shawn O. Pearce <sop@google.com>
2010-03-06 19:21:00 -08:00
4c50deea28 Fail sync when encountering "N commits behind."
This is almost always something the user needs to address
before continuing work, so promoting it to a failure (rather
than simply an informational message) seems the right way to
go. As a side-effect, repo will now exit with a non-zero
status code in this situation, so pipelines of the form
`repo sync && make` will fail if there are branches that
are stalled due to uploaded but unmerged patches.
2010-03-04 11:56:38 -05:00
d63060fc95 Check that we are not overwriting a local repository when syncing.
If a local git repository already exists in the folder where a newly
added project will be checked out, syncing would overwrite files under
that project's .git directory with repo's own symlinks. Make sure we do
not overwrite 'normal' files, and throw an error when that would happen.
2010-01-20 10:27:50 -08:00
b6ea3bfcc3 Honor url.insteadOf when setting up SSH control master connection
Repo can now properly handle url.insteadOf sections in the
user's ~/.gitconfig file.  This means that a user can now enjoy
the master-ssh functionality even if he/she uses insteadOf's in
~/.gitconfig to rewrite git:// URLs to ssh:// style URLs.

Change-Id: Ic0f04a9c57206a7b89eb0f10bf188c4c483debe3
Signed-off-by: Shawn O. Pearce <sop@google.com>
2010-01-04 05:38:39 -08:00
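A minimal sketch of the longest-prefix rewrite that insteadOf implies
(the rule mapping and names are illustrative; real git config may hold
several insteadOf values per base URL):

  def apply_insteadof(url, rules):
      # rules maps base -> prefix, e.g. from:
      #   [url "ssh://git.example.com/"]
      #     insteadOf = git://git.example.com/
      best = None
      for base, prefix in rules.items():
          if url.startswith(prefix):
              if best is None or len(prefix) > len(best[1]):
                  best = (base, prefix)
      return best[0] + url[len(best[1]):] if best else url
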
aa4982e4c9 sync: Fix split call on malformed email addresses
If an email address in a commit object contains a space, like a few
malformed ones on the Linux kernel, we still want to split only on
the first space.

Unfortunately my brain was too damaged by Perl and originally wrote
the split asking for 2 results; in Python split's argument is how
many splits to perform.  Here we want only 1 split, to break apart
the commit identity from the email address on the same line.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-12-30 18:38:27 -08:00
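The difference is easy to see in the interpreter (values invented):

  >>> 'badc0ffee bad email@example.com'.split(' ', 2)
  ['badc0ffee', 'bad', 'email@example.com']
  >>> 'badc0ffee bad email@example.com'.split(' ', 1)
  ['badc0ffee', 'bad email@example.com']
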
9bb1816bdc Fixing project renaming bug.
This bug happens when a project gets added to the manifest, and
then is renamed. Users who happened to have run "repo sync" after
the project was added but before the rename happened will try to
read the data from the old project, as the manifest was only updated
after all projects were updated successfully.
2009-12-10 15:24:45 -08:00
c24c720b61 Fix error parsing a non-existent configuration file
If a file (e.g. ~/.gitconfig) does not exist, we get None
here rather than a string.  NoneType lacks rstrip() so we
cannot strip it.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-07-02 16:12:57 -07:00
2d1a396897 Document how to contribute to the repo project
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-07-02 13:18:55 -07:00
1dcb58a7d0 Support GIT_EDITOR='vim -c "set textwidth=80"'
If there are shell special characters in the editor string, we must
use /bin/sh to parse and execute it, rather than trying to rely on
a simple split(' ').  This avoids vim starting up with two empty
buffers, due to a misparsed command line.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-07-02 12:45:47 -07:00
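A sketch of that dispatch (the metacharacter test is illustrative,
not repo's exact pattern):

  import re
  import subprocess

  def edit_file(editor, path):
      if re.search('[$ \t\'"]', editor):
          # Shell specials present: let /bin/sh parse the command string.
          args = ['/bin/sh', '-c', editor + ' "$@"', 'sh', path]
      else:
          args = [editor, path]
      return subprocess.call(args)
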
37dbf2bf0f Try to prevent 'repo sync' as a user name
When someone copies and pastes a setup line from a web page,
they might actually copy 'repo sync' onto the clipboard and wind
up pasting it into the "Your Name" prompt.  This means they will
initialize their client with the user name of "repo sync", creating
some rather funny looking commits later on.  For example:

  To setup your source tree:

    mkdir ~/code
    cd ~/code
    repo init -u git://....
    repo sync

If this entire block was just blindly copy and pasted into the
terminal, the shell won't read "repo sync" but "repo init" will.

By showing the user their full identity string, and asking them
to confirm it before we continue, we can give the hapless user a
chance to recover from this mistake, without unfairly harming those
who were actually named 'repo' by their parents.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-07-02 10:53:04 -07:00
438c54713a git_config: handle configuration entries with no values
A git-config entry with no value was preventing repo
from initializing.  This modifies _ReadGit() to handle
config entries with empty values.

Signed-off-by: David Aguilar <davvid@gmail.com>
Reported-by: Josh Guilfoyle <jasta00@gmail.com>
2009-06-29 00:24:36 -07:00
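`git config --list` prints one key=value pair per line, but a key with
no value prints bare. A tolerant parse might look like this (a sketch,
not the actual _ReadGit()):

  def parse_config_list(output):
      cfg = {}
      for line in output.splitlines():
          if '=' in line:
              key, val = line.split('=', 1)
          else:
              key, val = line, None  # entry exists but has no value
          cfg[key] = val
      return cfg
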
e020ebee4e .gitignore: add an entry for repopickles
Signed-off-by: David Aguilar <davvid@gmail.com>
2009-06-28 15:08:56 -07:00
21c5c34ee2 Support detached HEAD in manifest repository
If the manifest repository is on a detached HEAD and we are parsing
an XML formatted manifest we should simply set the branch property
to None, rather than crash with an AttributeError.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-06-25 16:47:30 -07:00
54fccd71fb Document any crashes from the user's text editor
Rather than failing with no information, display the child exit
status and the command line we tried to use to edit a text file.
There may be some useful information to help understand the crash.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-06-24 07:15:21 -07:00
fb5c8fd948 Fix invalid use of try-catch
It's try-except in Python.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-06-16 14:59:19 -07:00
26120ca18d Don't crash if the ssh client is already dead
If the SSH client terminated abnormally in the background (e.g. the
server shutdown while we were doing a sync) then the pid won't exist.
Instead of crashing, ignore it; the result we wanted (a non-orphaned
ssh process) is already achieved.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-06-16 11:49:10 -07:00
7da73d6f3b branches: Describe output format in repo help branches
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-06-12 17:35:43 -07:00
f0d4c36701 grep: Only use --color on git 1.6.3 and later
The --color flag wasn't introduced until git 1.6.3.  Prior to that
version, `git grep --color` just produces a fatal error, as it is
an unsupported option.  Since this is just pretty output and is not
critical to execution, we can simply omit the option if the version
of git we are running on doesn't support it.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-06-12 09:33:48 -07:00
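With the version available as a tuple, the guard is one comparison
(a sketch; parsing of `git --version` is assumed elsewhere):

  def grep_command(git_version, args):
      # git_version is a parsed tuple such as (1, 6, 3).
      cmd = ['git', 'grep']
      if git_version >= (1, 6, 3):
          cmd.append('--color')  # unsupported before git 1.6.3
      return cmd + args
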
2ec00b9272 Refactor git version detection for reuse
This way we can use it to detect feature support in the underlying
git, such as new options or commands that have been added in more
recent versions.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-06-12 09:32:50 -07:00
2a3a81b51f Ignore EOFError when reading a truncated pickle file
If the pickle config file is 0 bytes in length,  we may have
crashed (or been aborted) while writing the file out to disk.
Instead of crashing with a backtrace, just treat the file as
though it wasn't present and load off a `git config` fork.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-06-12 09:10:07 -07:00
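The tolerant load amounts to catching EOFError (a sketch; repo of this
era used cPickle on Python 2):

  import pickle

  def load_cache(path):
      try:
          with open(path, 'rb') as fd:
              return pickle.load(fd)
      except EOFError:
          return None  # truncated cache: fall back to `git config`
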
7b4f43542a Add missing return False to preconnect
Noticed by users on repo-discuss, we were missing a return False
here to signal that SSH control master was not used to setup the
network connection.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-06-12 09:08:34 -07:00
9fb29ce123 sync: Keep the project.list file sorted
It's easier to locate an entry visually if the file is sorted.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-06-04 20:41:26 -07:00
3a68bb4c7f sync: Tolerate blank lines in project.list
If a line is blank in project.list, it's not a relevant project path,
so skip over it.  Existing project.list files may have blank lines if
sync was run with no projects at all, and the file was created empty.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-06-04 16:21:01 -07:00
cd1d7ff81e sync: Don't process project.list in a mirror
We have no working tree, so we cannot update the project.list
state file, nor should we try to delete a directory if a project is
removed from the manifest.  Clients would still need the repository
for historical records.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-06-04 16:20:02 -07:00
da88ff4411 Silence 'Current branch %s is up to date' during sync
We accidentally introduced this message during 1.6.8 by always
invoking `git rebase` when there were no new commits from the
upstream, but the user had local commits.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-06-03 11:09:31 -07:00
8135cdc53c Delete empty parent subdirs after deleting obsolete paths.
After sync, we delete obsolete project paths.
Iterate and delete parent subdirs which are empty.
Tested on projects within subdirectories.
2009-06-02 15:08:45 -07:00
4f2517ff11 Update project paths after sync.
After a repo sync, some of the project paths might need
to be removed. This change maintains a list of project
paths from the previous sync operation and compares it
against the new one.
2009-06-02 11:00:53 -07:00
fe200eeb52 Fix unnecessary self in project.py
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-06-01 15:28:21 -07:00
078a8b270f Add PyDev project files to repo 2009-06-02 00:09:07 +02:00
3c8dea1f8d Change project.revision to revisionExpr and revisionId
The revisionExpr field now holds an expression from the manifest,
such as "refs/heads/master", while revisionId holds the current
commit-ish SHA-1 of the revisionExpr.  Currently that is only
filled in if the manifest points directly to a SHA-1.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-05-29 18:45:20 -07:00
8ad8a0e61d Change DWIMery hack for dealing with rewound remote branch
The trick of looking at the reflog for the remote tracking branch
and only going back one commit works some of the time, but not all of
the time.  It's sort of relying on the fact that the user didn't use
`repo sync -n` or `git fetch` to only update the tracking branches
and skip the working directory update.

Doing this right requires looking through the history of the SHA-1
source (what the upstream used to be) and finding a spot where the
DAG diverged away suddenly, and considering that to be the rewind
point.  That's really difficult to do, as we don't have a clear
picture of what that old point was.

A close approximation is to list all of the commits that are in
HEAD, but not the new upstream, and rebase all of those where the
committer email address is this user's email address.  In most cases,
this will effectively rebase only the user's new original work.

If the user is the project maintainer and rewound the branch
themselves, and they don't want all of the commits they have created
to be rebased onto the new upstream, they should handle the rebase
on their own, after the sync is complete.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-05-29 18:45:17 -07:00
d1f70d9929 Refactor how projects parse remotes so it can be replaced
We now feed Project a RemoteSpec, instead of the Remote directly
from the XmlManifest.  This way the RemoteSpec already has the
full project URL, rather than just the base, permitting other
types of manifests to produce the URL in their own style.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-05-29 09:31:28 -07:00
c8a300f639 Refactor Manifest to be XmlManifest
We'll soon be supporting two different manifest formats, but we
can't immediately remove support for the current XML one that is
in wide spread use within Android.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-05-29 09:31:28 -07:00
1b34c9118e Allow callers of GitConfig to specify the pickle file path
This way we can put it in another directory than the config file
itself, e.g. hide it inside ".git" when parsing a ".gitmodules"
file from the working tree.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-05-29 09:31:00 -07:00
366ad214b8 Teach GitConfig how to yield subsection names
This can be useful when pulling apart a configuration file, like
finding all entries which match submodule.*.*.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-05-19 13:02:00 -07:00
242b52690d Remove support for the extra <remote> definitions in manifests
These aren't that widely used, and actually make it difficult for
users to fully mirror a forest of repositories, and then permit
someone else to clone off that forest, rather than the original
upstream servers.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-05-19 13:01:52 -07:00
4cc70ce501 Remove unused parsing support for <require commit=""/>
We haven't supported this in a while, but the parser was still here.
It's all dead code, so strip it out.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-05-19 13:01:48 -07:00
498a0e8a79 Make 'repo branches -a' the default behavior
Extensive discussion with users led to the conclusion that needing
to supply -a to see what they really wanted was just wrong.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-05-18 12:28:57 -07:00
bc7ef67d9b Automatically guess Gerrit change number in "repo upload --replace"
This feature only works if you have one commit to replace right now
(the common case).
2009-05-05 15:01:18 -07:00
2f968c943b Fix ssh://user@hostname/ style URLs parsing
I only tested this with ssh://hostname/ style URLs, so I failed
to test ssh://user@hostname/ format, which failed if the hostname
portion was longer than 1 character.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-30 14:30:28 -07:00
2b5b4ac292 Disable SSH ControlMaster option on Cygwin
Bug: REPO-29
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-23 17:22:18 -07:00
6f6cd77a50 Require a project or '--all' to be specified when using 'repo start'. 2009-04-22 18:05:50 -07:00
896d5dffd3 Fix UnboundLocalError: local variable 'port' when using SSH
If the SSH URL doesn't contain a port number, but uses the ssh://
or git+ssh:// syntax we raised a Python runtime error due to the
'port' local variable not being assigned a value.  Default it to
the IANA assigned port for SSH, 22.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-21 14:51:04 -07:00
9360966bd2 Perform copy file activity when creating a new work directory
Performance improvements in repo sync caused us to skip out of the
initial Sync_LocalHalf without ever running CopyFiles, so we didn't
create the top level Makefile in new clients whose manifest requests
one with a <copyfile> element.

Now we run CopyFiles after the initial read-tree that populates
the project working directory.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-21 10:54:59 -07:00
ef9ce1d0a5 Change -p command to use stdout instead of stderr. 2009-04-21 10:00:16 -07:00
05f66b6836 Fix 'repo sync' rebase logic on a published branch
If the current branch is published, but all published commits are
merged into the manifest revision, but there is also at least one
unpublished commit on the current branch, we should rebase the
unpublished commit, rather than creating a merge commit.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-21 08:28:06 -07:00
eb7af87bcf Document the SSH ControlMaster behavior of repo sync
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-21 08:28:06 -07:00
938d608c9c Support a level 2 heading in help description text
The level 2 headings (denoted by ~) indent the heading two spaces,
but continue to use the bold formatter to offset them from the
other surrounding text.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-21 08:28:06 -07:00
d63bbf44dc Work around 'ControlPath too long' on Mac OS X
Mac OS X sets TMPDIR to a very long path within /var, so long
that a socket created in that location is too big for a struct
sockaddr_un on the platform, resulting in OpenSSH being unable
to create or bind to a socket in that location.

Instead we try to use the very short and very common /tmp, but
fall back to the guessed default if /tmp does not exist.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-21 08:05:27 -07:00
a8421a128a Fix launching of editor under 'repo upload --replace'
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 16:57:46 -07:00
fb2316146f Automatically use SSH control master support during sync
By creating a background ssh "control master" process which lives
for the duration of our sync cycle we can easily cut the time for
a no-op sync of 132 projects from 60s to 18s.

Bug: REPO-11
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 16:50:47 -07:00
8bd5e60b16 Make 'repo status' show the branch you are currently on
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 15:31:36 -07:00
3d2cdd0ea5 Highlight projects which still have sync failures during 'repo status'
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 15:26:10 -07:00
4e3d6739a1 Print '(no branches)' if the output of repo branches is empty
This way it's clear the command did something, and reported
that it had nothing to show you, because you have no active
branches in this client.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 15:18:35 -07:00
552ac89929 Modify 'repo abandon' to be more like 'repo checkout' and 'repo start'
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 15:15:24 -07:00
89e717d948 Improve checkout performance for the common unmodified case
Most projects will have their branch heads matching in all branches,
so switching between them should be just a matter of updating the
work tree's HEAD symref.  This can be done in pure Python, saving
quite a bit of time over forking 'git checkout'.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 15:04:41 -07:00
0f0dfa3930 Add progress meter to 'repo start'
This is mostly useful if the number of projects to switch is many
(e.g. all of Android) and a large number of them are behind the
current manifest revision.  We wind up needing to run git just to
make the working tree match, and that often makes the command take
a couple of seconds longer than we'd like.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 14:53:39 -07:00
76ca9f8145 Make usage of open safer by setting binary mode and closing fds
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 14:48:03 -07:00
accc56d82b Speed up 'repo start' by removing some forks
It's quite common for most projects to be matching the current
manifest revision, as most developers only modify one or two projects
at any one time.  We can speed up `repo start foo` (that impacts
the entire client) by performing most of the branch creation and
switch operations in pure Python, and thus avoid 4 forks per project.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 14:45:51 -07:00
db45da1208 Add -p to repo forall to improve output formatting
When trying to read log output from many projects at once it can
be difficult to make sense of which messages came from where.

For many professional developers it is common to want to view the
last week's worth of your work, so you can write a weekly summary
of your activity for your status report.

This is easier with the new -p option:

  repo forall -pc git log --reverse --since=1.week.ago --author=sop

produces a report of all commits written by me in the last week,
formatted in a paged output display, with headers inserted in
front of each project's output.

Where this can be even more useful is with git log's pickaxe,
e.g. now we can use:

  repo forall -pc git log -Sbar v1.0..v1.1

to locate all additions or removals of the symbol 'bar' since v1.0,
up to and including v1.1.  Before displaying the matching commits in
a project, a project header is shown, giving the user some context
information for the matching results.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 13:49:13 -07:00
50fa1ac6db Clarify the option section header in 'repo help grep'
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 11:44:33 -07:00
5da554f294 Show options help after the summary for a command
It is a bit clearer to read this way.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 11:44:00 -07:00
77bb4af241 Improve the help text for 'repo init'
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 11:33:32 -07:00
fd89b67f5c Clarify options that control the repo executable version
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 11:28:57 -07:00
a490f03dc2 Correct note about local_manifest.xml capabilities
With the <remove-project> element we can remove projects, and
fully replace them with a different definition.  So this note
is out of date.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 11:25:58 -07:00
deec0536d6 Only display project path in 'repo stage -i'
Generally we only show the project path, relative to the top of the
client.  Showing the project name may be confusing for the end-user.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 11:22:13 -07:00
06e556d202 Improve the help text for 'repo start'
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 11:19:01 -07:00
8225cdc56b Display the URL we will upload changes to for review
This gives the user the last chance to confirm where the change is
going to be sent to.  Knowing the review server URL will help the
user decide if continuing with the upload makes sense.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 11:00:35 -07:00
337fb9c7e9 Improve the help text for 'repo upload'
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 10:59:33 -07:00
9bb9617858 Remove unused methods from project.ReviewableBranch
These used to be used back when we had Gerrit 1.x support and used
HTTP based uploads to transmit changes for review.  Since we moved
entirely to Gerrit 2.x, these are no longer called.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 10:53:27 -07:00
f690687671 Only fetch repo once-per-day under normal 'repo sync' usage
It's unlikely that a new version of repo will be delivered in any
given day, so we now check only once every 24 hours to see if repo
has been updated.  This reduces the sync cost, as we no longer need
to contact the repo distribution servers every time we do a sync.

repo selfupdate can still be used to force a check.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 10:49:00 -07:00
336f7bd0ed Avoid git fork on the common case of repo not changing
Usually repo is upgraded only once a week, if that often.  Most of
the time we invoke HasChanges on the repo project (or even on the
manifest project) the current HEAD will resolve to the same SHA-1
as the remote tracking ref, and there are therefore no changes.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 10:39:28 -07:00
2810cbc778 Only display a progress meter once we spend 0.5 seconds on a task
The point of the progress meter is to let the user know that the
task is progressing, and give them a chance to estimate when it will
be complete.  If the task completes in under 0.5 seconds then it
is sufficiently fast enough that the user doesn't need to be kept
up-to-date on its progress; in fact showing the meter may just slow
the task down waiting on the tty to redraw.

We now delay the progress meter 0.5 seconds (or 1 second if the
Python time.time() function isn't accurate enough) to avoid any
really fast tasks, like a no-op local sync.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 10:09:16 -07:00
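The delay can live entirely inside the meter's update method (a sketch
of the described behavior, not repo's actual progress.py):

  import time

  class Progress(object):
      DELAY = 0.5  # seconds before the meter is first drawn

      def __init__(self, title, total):
          self.title = title
          self.total = total
          self.done = 0
          self.start = time.time()

      def update(self, inc=1):
          self.done += inc
          if time.time() - self.start < self.DELAY:
              return  # still fast; don't touch the tty yet
          # ...redraw '%s: %d/%d' on the terminal here...
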
6ed4e28346 Disable the progress meter when trace is enabled
The trace output often interferes with the progress meter, so it's
easier to just disable the progress meter if trace is active.
It's already verbose enough to let the user know we are working,
which is all the progress meter is there for anyway.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 09:59:18 -07:00
ad3193a0e5 Fix repo --trace to show ref and config loads
The value of the variable TRACE was copied during the import, which
happens before the --trace option can be processed.  So instead we
now use a function to determine if the value is set, as the function
can be safely copied early during import.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-18 09:54:51 -07:00
b81ac9e654 Enable tracing of ref scans and config unpickling
These are not as expensive as spawning a git command, but they are
not free either.  We want to keep track of how many times we wind
up calling them on any particular operation.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-17 21:03:45 -07:00
0f3dd233ec Avoid unnecessary git symbolic-ref calls during repo sync
If the m/BRANCH ref is already pointing at the value set in the
manifest there is no reason to set it again.  Leave it alone,
thus saving a full fork+exec call.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-17 21:03:45 -07:00
c12c360f89 Pickle parsed git config files
We now cache the output of `git config --list` for each of our
GitConfig instances in a Python pickle file.  These can be read
back in using only the Python interpreter at a much faster rate
than we can fork+exec the git config process.

If the corresponding git config file has a newer modification
timestamp than the pickle file, we delete the pickle file and
regenerate it.  This ensures that any edits made by the user
will be taken into account the next time we consult the file.

This reduces the time for a no-op repo sync from 0.847s to 0.269s.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-17 21:03:45 -07:00
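The staleness test is a pair of mtime comparisons (a sketch; the
re-pickling after a fresh `git config --list` fork is elided):

  import os
  import pickle

  def load_cached_config(conf_path, cache_path):
      try:
          if os.path.getmtime(cache_path) > os.path.getmtime(conf_path):
              with open(cache_path, 'rb') as fd:
                  return pickle.load(fd)
          os.remove(cache_path)  # config edited since pickling: stale
      except (OSError, IOError, EOFError):
          pass
      return None  # caller forks `git config --list` and re-pickles
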
fbcde472ca Improve repo sync performance by avoid git forks
By resolving the current HEAD and the manifest revision using pure
Python, we can in the common case of "no changes" avoid a lot of
git operations and directly jump out of the local sync method.

This reduces the no-op `repo sync -l` time for Android's 114 projects
from more than 6s to under 0.8s.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-17 21:03:45 -07:00
d237b69865 Implement git ref reading purely in Python
It's much faster to read the refs from 114 projects when the reader
is pure Python and just doing file IO than forking 114 git commands
and parsing their output.

The reader caches refs based upon file mtimes.  If any single ref
file has been modified since the last read, we re-read the entire
repository's ref namespace.  This simplifies the code as we don't
need to worry about shooting down symbolic-refs, but it may cause
more IO than is necessary if only one ref gets updated.

This change drops `repo branches` in Android from 1.658s to 0.206s.
Likewise, `repo sync` improves dramatically as well.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-17 21:03:41 -07:00
5b23f24881 Implement 'git symbolic-ref HEAD' in Python
This is invoked once per project in `repo sync`.  Taking it out
saves about 1/114 of a second, so on a large set of projects like
Android it can save up to a full second of sync time.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-17 20:59:44 -07:00
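Reading the symref needs only a few lines of file IO (a sketch under
the usual .git/HEAD layout):

  import os

  def symbolic_head(gitdir):
      # HEAD holds "ref: refs/heads/<name>" while on a branch.
      with open(os.path.join(gitdir, 'HEAD')) as fd:
          line = fd.read().strip()
      if line.startswith('ref: '):
          return line[len('ref: '):]
      return line  # detached HEAD: a bare SHA-1
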
66bdd46871 Only compute commits in repo upload if we need to show a prompt
If the user has disabled a prompt, skip the two commands we use to
obtain the list of commits and the date of the branch.  These will
never be displayed and just waste the end-user's time.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-17 20:54:39 -07:00
a608fb024b Allow review.URL.autoupload to skip prompting during repo upload
If review.URL.autoupload is set to true in a project's .git/config
or in ~/.gitconfig then `repo upload` will automatically upload,
and skip prompting the end-user.

Conversely, if review.URL.autoupload is set to false, then repo
will refuse to upload to that project.

Bug: REPO-25
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-17 12:11:24 -07:00
f8e3273dec Support mixed case subsection names in Git config files
In the case of:

  [url "Foo"]
    insteadOf = Bar

We should return "Bar" for the key "url.Foo.insteadof", but not
for the key "url.foo.insteadof".  This requires splitting the
key into its components and only lower casing the section and
value name, leaving the subsection portion alone.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-17 11:00:31 -07:00
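A sketch of that canonicalization (helper name invented):

  def canonical_key(key):
      # Lower-case the section and value name; leave any subsection
      # between them exactly as written.
      parts = key.split('.')
      if len(parts) >= 3:
          return '.'.join(
              [parts[0].lower()] + parts[1:-1] + [parts[-1].lower()])
      return key.lower()

  canonical_key('url.Foo.insteadOf')  # -> 'url.Foo.insteadof'
  canonical_key('url.foo.insteadOf')  # -> 'url.foo.insteadof'
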
006734b798 Remove confusing message from repo sync output
Someone pointed out this message isn't always the truth; so we
shouldn't print it.  The code path is executed when there are
published commits, yet our output talks about unpublished ones.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-17 10:28:25 -07:00
350cde4c4b Change repo sync to be more friendly when updating the tree
We now try to sync all projects that can be done safely first, before
we start rebasing user commits over the upstream.  This has the nice
effect of making the local tree as close to the upstream as possible
before the user has to start resolving merge conflicts, as that extra
information in other projects may aid in the conflict resolution.

Informational output is buffered and delayed until calculation for
all projects has been done, so that the user gets one concise list
of notice messages, rather than it interrupting the progress meter.

Fast-forward output is now prefixed with the project header, so the
user can see which project that update is taking place in, and make
some relation of the diffstat back to the project name.

Rebase output is now prefixed with the project header, so that if
the rebase fails, the user can see which project we were operating
on and can try to address the failure themselves.

Since rebase sits on a detached HEAD, we now look for an in-progress
rebase during sync, so we can alert the user that the given project
is in a state we cannot handle.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-16 11:21:18 -07:00
48244781c2 Refactor error message display in project.py
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-16 08:25:57 -07:00
19a83d8085 Use default rebase during sync instead of rebase -i
rebase interactive (aka rebase -i) has changed in newer versions
of git, and doesn't always generate the sequence of commits the
same way it used to.  It also doesn't handle having a previously
applied commit try to be applied again.

The default rebase algorithm is better suited to our needs.
It uses --ignore-if-in-upstream when generating the patch series
for git-am, and git-am with its 3-way fallback is able to handle
a rename case just as well as the cherry-pick variant used by -m.
It's also a generally faster implementation.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-16 08:23:29 -07:00
b1168ffada Don't divide by zero in progress meter
If there are no projects to fetch, the progress meter would
have divided by zero during `repo sync`, and that throws a
ZeroDivisionError.  Instead we report the progress with an
unknown amount remaining.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-16 08:05:05 -07:00
4c5c7aa74b Document 'repo status' output
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-13 14:06:34 -07:00
ff84fea0bb Fix formatting of 'repo help sync'
The formatting for the environment variable section was incorrect.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-13 12:11:59 -07:00
d33f43a754 Cleanup checkout help to match other commands
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-13 12:11:31 -07:00
e756c412e3 Add 'repo selfupdate' to upgrade only repo
Users may want to upgrade only repo to the latest release, but
leave their working tree state alone and avoid 'repo sync'.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-13 11:53:53 -07:00
b812a36236 Add 'repo grep' to support searching all projects
Users can now use 'repo grep' to search all projects, rather than
'repo forall -c git grep'.  It's not only shorter to type, but it
also filters results better by highlighting which projects matched
in the client workspace.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-10 20:37:47 -07:00
161f445a4d status: tell the user the working tree is clean
If there is nothing output at all, tell the user the working tree is
completely clean.  It just gives them a bit more of a warm-fuzzy
feeling knowing repo ran until the end.  It also more closely
matches the output of git status.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-10 19:01:08 -07:00
68194f42b0 Add a project progress meter to 'repo sync'
This way users can see how much is left during fetch.  It's
especially useful when most syncs are no-ops but there are
hundreds of repositories to poll.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-10 19:01:04 -07:00
b1562faee0 Add 'repo sync -l' to only do local operations
This permits usage of 'repo sync' while offline, as we bypass the
network based portions of the code and do only the local sync.

An example use case might be:

  repo sync -n  ; # while we have network
  ... some time later ...
  repo sync -l  ; # while without network, come up to date

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-10 17:08:02 -07:00
3e768c9dc7 Add 'repo sync -d' to detach projects from their current topic
The -d flag moves the project back to a detached HEAD state,
matching what is listed in the manifest.  This can be useful to
set a client to something stable (or at least well-known), such as
before a sequence of 'repo download' commands are used to get some
changes for testing.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-10 17:08:02 -07:00
96fdcef9e3 Add 'repo sync -n' to only do the network transfer
This makes it easier to update all repositories, without actually
impacting the working directory, or learning about how to use
`repo forall -c 'git fetch $REPO_REMOTE' `.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-10 17:07:52 -07:00
2a1ccb2b0c Hide the internal sync --repo-upgraded flag from users
This is only meant to be passed through while repo upgrades itself
during a sync.  It should never be something a user invokes on
their own.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-10 17:07:32 -07:00
0a389e94de Make 'repo start' restartable upon failures
If `repo start foo` fails due to uncommitted and unmergeable changes
in a single project, we have switched half of the projects over to
the new target branches, but not on the one that failed to move.

This change improves the situation by doing three things differently:

- We keep going when we encounter an error, so other projects
  that can successfully switch still switch.

- We ignore projects whose current branch is already on the
  requested name; they are logically already setup.

- We checkout the branch if it already exists, rather than
  trying to recreate the branch.

Bug: REPO-22
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-10 16:21:18 -07:00
2675c3f8b5 Don't capture stdout during 'repo checkout'
There isn't any great value in buffering stdout into memory
coming from git checkout.  So don't bother doing it.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-10 16:20:25 -07:00
27b07327bc Add a repo branches subcommand to describe current branches
We now display a summary of the available topic branches in this
client, based upon a sorted union of all existing projects.

Bug: REPO-21
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-10 16:02:48 -07:00
02d7945eb8 Add checkout command.
Teach repo how to checkout a branch in all projects or a list
of specific projects.

Bug: REPO-21
2009-04-10 13:01:24 -07:00
8f82a4f828 Don't start the pager if stdout is a pipe
The repo script often uses a pager by default and will produce
control characters (coloring) to standard output when using the
pager, even if the output is redirected to another pipe or script.
This is because the pager setup checked for the terminal presence
on FD 0, and in case of redirection FD 0 is still attached to
the terminal.

Instead require that both FD 0 and FD 1 are connected to the terminal
in order to start the pager.

Bug: REPO-19, b.android.com/2004
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-04-01 07:24:22 -07:00
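The check itself is tiny (a sketch):

  import os

  def want_pager():
      # Both stdin (FD 0) and stdout (FD 1) must be the terminal.
      return os.isatty(0) and os.isatty(1)
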
146fe902b7 Only lookup review server '/ssh_info' once per repo process
If the user has multiple projects to upload changes to, and they
are all going to the same review server, we only need to query the
'/ssh_info' data once.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-25 14:06:43 -07:00
722acefdc4 Produce a useful error if /ssh_info was HTML and not plain text
If /ssh_info is protected by an HTML based login page, we may get
back a "200 OK" response from the server with some HTML document
asking us to authenticate.  This can't be parsed into a host name
and port number, so we shouldn't even try.

Valid host names and decimal port numbers cannot contain '<', but
an unexpected HTML login page would.  So we test for '<' to give
us a fair indicator that the content isn't what we think it is,
and bail out.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-25 13:58:14 -07:00
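A sketch of the test (helper invented; a genuine reply is a plain
"hostname port" line):

  def parse_ssh_info(reply):
      if '<' in reply:
          return None  # looks like HTML, not ssh_info: bail out
      host, port = reply.split()
      return host, int(port)
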
13cc3844d7 Handle review URLs pointing directly at Gerrit
If a review URL is set to 'http://host/Gerrit' because the user
thinks that is the correct way to point repo at Gerrit, we should
be a bit more flexible and fix the URL by dropping the '/Gerrit'
suffix and replace it with '/ssh_info'.

Likewise, if a review URL points already at '/ssh_info' for a Gerrit
instance, we should leave it alone.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-25 13:54:54 -07:00
feabbdb440 Don't bother listing branch URLs during upload
Modern Gerrit2 automatically outputs the URL for each commit to
stderr as it creates the records.  Dumping the URL ourselves is
unnecessary additional output, and worse is just an approximate
guess for the correct web URL.  Gerrit might not live at the top
level directory for the server, or might even prefer a different
hostname for web connections than what is listed in the manifest.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-19 10:20:27 -07:00
8630f39dba Fix repo re-init in a mirror to not prompt
On a mirror client we don't prompt for user.name or user.email, as the
data is only necessary if you will make new commits.  On a re-init
we were testing the command line option, not the existing IsMirror
property from the manifest configuration file.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-19 10:17:12 -07:00
df01883f9b Allow repo init to restart if URL was initially invalid
This allows the user to run "repo init -u" again after an
initial attempt failed due to an invalid URL.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-17 08:15:27 -07:00
1fc99f4e47 Give a more friendly error in 'repo init' if manifest url is invalid
Instead of a stack trace ending in origin/master not existing we
now tell the user the manifest url is invalid if 'git fetch' has
failed out early.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-17 08:11:51 -07:00
1775dbe176 Set forall environment variables to empty string if None
If the value obtained is None we now set the variable to
'' instead, in an attempt to make execve() happier about
our 3rd argument, the env dictionary.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-17 08:03:04 -07:00
521cd3ce67 Support "repo init -b foo && repo sync" to switch baselines
We now correctly support re-initializing an existing client to point
to a different branch of the same manifest repository, effectively
allowing the client to switch the baseline it is operating on.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-09 18:53:20 -07:00
5470df6219 Don't permit "repo init --mirror" in an existing client
Simply setting repo.mirror true doesn't make a client into a mirror.
The on-disk layout is completely wrong for a mirror repository,
and until we fix our layout for a non-mirror client to more closely
resemble the upstream we can't do anything to easily turn on or
turn off the mirror status flag.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-09 18:51:58 -07:00
0ed2bd1d95 Add global --trace command line option
This has the same effect as saying "export REPO_TRACE=1" in
your shell prior to starting repo, but is documented in the
command usage and perhaps easier to use.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-09 18:26:31 -07:00
c7a4eefa7e Add repo manifest -o to save a manifest
This can be useful to create a new manifest from an existing client,
especially if the client wants to use the "-r" option to set each
project's revision to the current commit SHA-1, making a sort of a
tag file that can be used to recreate this exact state elsewhere.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-05 10:32:38 -08:00
43c3d9ea17 Add a 'repo manifest' command whose help is the manifest file format
This should make it easier for users to discover the file format
on their own, and read about it.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-04 14:26:50 -08:00
4259b8a2ac Tell users how to see the complete list of commands
Using "repo help --all" may not be obvious.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-04 14:03:16 -08:00
2816d4f387 Set core.bare to true on mirror repositories
When creating a mirror repository we will always be using a bare
repository.  Setting $GIT_DIR/config to have core.bare = true is
reasonable and helps Git to recognize the environment it is in.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-03 17:53:18 -08:00
44469464d2 Allow repo forall -c on a mirror by using GIT_DIR as pwd
We can permit a forall on a mirror, but only if we put
the command into the git repository.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-03 17:51:01 -08:00
c95583bf4f Don't permit users to run repo status in a mirror client
If a client was created with "repo init --mirror" then there are
no working directories present, and no files checked out.  Using
a command like "repo status" in this context makes no sense, and
actually throws back a Python traceback at the console when the
underlying commands fail out.

We now tag commands with the MirrorSafeCommand type if they are
able to be executed within a mirror directory safely.  Using a
command in a mirror which lacks this base class results in a
useful error letting you know the command isn't supported.

Bug: REPO-14
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-03 17:47:06 -08:00
6a5644d392 Get rid of the horrible android import work around hack
Months ago when the Android Open Source Project launched we had some
import errors that had to be fixed and worked over.  These hacks
were here to help users update their clients to newer versions of
the imported code.

Its very likely all clients have either been deleted, or have been
updated and have the fixed imports.  So we don't need this hack in
repo anymore.

If a very ancient client still existed, it would need to be created
from scratch anyway, due to the Android cupcake branch merging
into master and the manifest changes not being able to be handled
correctly by repo.  A new client wouldn't have the incorrectly
imported code in it, and thus wouldn't need this hack.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-03 13:52:20 -08:00
fe08675956 Fix repo status when there are renamed/copied files
I missed a parameter in the format string, but still provided the
value in the parameter list, so the format failed to produce an
output message.

Bug: REPO-15
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-03 13:49:48 -08:00
be0e8ac232 Export additional environment variables to repo forall:
REPO_PATH is the path relative to the root of the client.

REPO_REMOTE is the name of the remote system from the manifest.

REPO_LREV is the name of the revision from the manifest, but
translated to something the local repository knows.

REPO_RREV is the name of the revision from the manifest.

This allows us to do commands like:

  repo forall -c 'echo "(cd $REPO_PATH && git checkout `git rev-parse HEAD`)"'
2009-03-02 19:32:28 -08:00
47c1a63a07 Add 'repo version' to describe what code we are running
I meant to have this in here, so clients can more easily report
what version of repo they are running.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-02 18:24:23 -08:00
559b846b17 Report better errors when a project revision is invalid
If a manifest specifies an invalid revision property, give the
user a better error message detailing the problem, instead of an
ugly Python traceback with a strange Git error message.

Bug: REPO-2
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-02 12:56:08 -08:00
7c6c64d463 Fix repo prune output to sort by branch name
We didn't always sort the output.  Now we do.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-02 12:38:45 -08:00
3778f9d47e Fix repo prune to work on git 1.6.1-rc3~5 and later
Prior to git 1.6.1-rc3~5 the output of 'git branch -d' matched:

  Deleted branch (.*)\.

where the subgroup grabbed the branch name. In v1.6.1-rc3~5 (aka
a126ed0a01e265d7f3b2972a34e85636e12e6d34) Brandon Casey changed
the output to include the SHA-1 of the branch name, now matching
the pattern:

  Deleted branch (.*) \([0-9a-f]*\)\.

Instead of parsing the output of git branch we now re-obtain the
list of branches after the deletion attempt and perform a set
difference in memory to determine which branches we were able to
successfully delete.

Bug: REPO-9
Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-03-02 12:38:36 -08:00
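The set difference approach, sketched (the two callbacks stand in for
`git branch` and `git branch -d` invocations):

  def prune_branches(dead, list_branches, delete_branch):
      before = set(list_branches())
      for name in dead:
          delete_branch(name)  # failures are fine; we diff afterwards
      after = set(list_branches())
      return before - after  # the branches actually deleted
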
993eedf9fa Merge 2009-02-10 11:53:40 -08:00
02e0cdf359 Merge 2009-02-10 11:53:30 -08:00
a8e98a6962 Fix color parsing to not crash when user defined colors are set
We didn't use the right Python string methods to parse colors.

  $ git config --global color.status.added yellow

managed to cause a stack trace due to undefined methods trim()
and lowercase().  Instead use strip() and lower().

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-02-02 16:17:02 -08:00
5ab508cbcc Remove the now unnecessary Makefile
In a pure Python project run directly from source we really don't
have a need for a Makefile.  Previously it held the rule to update
the protobuf client from Gerrit1, but now that we have retired that
logic we don't need it anymore.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-01-26 10:56:45 -08:00
370e3fa666 Remove the protobuf based HTTP upload code path
Now that Gerrit2 has been released and the only supported upload
protocol is direct git push over SSH we no longer need the large
and complex protobuf client library, or the upload chunking logic
in gerrit_upload.py.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-01-26 10:55:39 -08:00
b54a392c9a Support Gerrit2's ssh:// based upload
In Gerrit2 uploads are sent over "git push ssh://...", as this
is a more efficient transport and is easier to code from external
scripts and/or direct command line usage by an end-user.

Gerrit1's HTTP POST based format is assumed if the review server
does not have the /ssh_info URL available on it.

Signed-off-by: Shawn O. Pearce <sop@google.com>
2009-01-05 16:34:27 -08:00
70 changed files with 5744 additions and 7392 deletions

1
.gitignore vendored

@@ -1 +1,2 @@
*.pyc
.repopickle_*

17
.project Normal file

@@ -0,0 +1,17 @@
<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
	<name>repo</name>
	<comment></comment>
	<projects>
	</projects>
	<buildSpec>
		<buildCommand>
			<name>org.python.pydev.PyDevBuilder</name>
			<arguments>
			</arguments>
		</buildCommand>
	</buildSpec>
	<natures>
		<nature>org.python.pydev.pythonNature</nature>
	</natures>
</projectDescription>

10
.pydevproject Normal file

@@ -0,0 +1,10 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?eclipse-pydev version="1.0"?>
<pydev_project>
<pydev_pathproperty name="org.python.pydev.PROJECT_SOURCE_PATH">
<path>/repo</path>
</pydev_pathproperty>
<pydev_property name="org.python.pydev.PYTHON_PROJECT_VERSION">python 2.4</pydev_property>
<pydev_property name="org.python.pydev.PYTHON_PROJECT_INTERPRETER">Default</pydev_property>
</pydev_project>


@@ -1,29 +0,0 @@
#
# Copyright 2008 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
GERRIT_SRC=../gerrit
GERRIT_MODULES=codereview froofle
all:

clean:
	find . -name \*.pyc -type f | xargs rm -f

update-pyclient:
	$(MAKE) -C $(GERRIT_SRC) release-pyclient
	rm -rf $(GERRIT_MODULES)
	(cd $(GERRIT_SRC)/release/pyclient && \
	 find . -type f \
	   | cpio -pd $(abspath .))

79
SUBMITTING_PATCHES Normal file

@@ -0,0 +1,79 @@
Short Version:

 - Make small logical changes.
 - Provide a meaningful commit message.
 - Make sure all code is under the Apache License, 2.0.
 - Publish your changes for review:

   git push https://gerrit-review.googlesource.com/git-repo HEAD:refs/for/maint

Long Version:

I wanted a file describing how to submit patches for repo,
so I started with the one found in the core Git distribution
(Documentation/SubmittingPatches), which itself was based on the
patch submission guidelines for the Linux kernel.

However there are some differences, so please review and familiarize
yourself with the following relevant bits:

(1) Make separate commits for logically separate changes.

Unless your patch is really trivial, you should not be sending
out a patch that was generated between your working tree and your
commit head.  Instead, always make a commit with complete commit
message and generate a series of patches from your repository.
It is a good discipline.

Describe the technical detail of the change(s).

If your description starts to get too long, that's a sign that you
probably need to split up your commit to finer grained pieces.

(2) Check the license

repo is licensed under the Apache License, 2.0.

Because of this licensing model *every* file within the project
*must* list the license that covers it in the header of the file.
Any new contributions to an existing file *must* be submitted under
the current license of that file.  Any new files *must* clearly
indicate which license they are provided under in the file header.

Please verify that you are legally allowed and willing to submit your
changes under the license covering each file *prior* to submitting
your patch.  It is virtually impossible to remove a patch once it
has been applied and pushed out.

(3) Sending your patches.

Do not email your patches to anyone.

Instead, login to the Gerrit Code Review tool at:

  https://gerrit-review.googlesource.com/

Ensure you have completed one of the necessary contributor
agreements, providing documentation to the project maintainers that
they have right to redistribute your work under the Apache License:

  https://gerrit-review.googlesource.com/#/settings/agreements

Ensure you have obtained an HTTP password to authenticate:

  https://gerrit-review.googlesource.com/new-password

Push your patches over HTTPS to the review server, possibly through
a remembered remote to make this easier in the future:

  git config remote.review.url https://gerrit-review.googlesource.com/git-repo
  git config remote.review.push HEAD:refs/for/maint

  git push review

You will be automatically emailed a copy of your commits, and any
comments made by the project maintainers.


@@ -1 +0,0 @@
__version__ = 'v1.0-112-gbcd4db5a'


@@ -1,32 +0,0 @@
#!/usr/bin/python2.4
# Generated by the protocol buffer compiler. DO NOT EDIT!
from froofle.protobuf import descriptor
from froofle.protobuf import message
from froofle.protobuf import reflection
from froofle.protobuf import service
from froofle.protobuf import service_reflection
from froofle.protobuf import descriptor_pb2
_RETRYREQUESTLATERRESPONSE = descriptor.Descriptor(
  name='RetryRequestLaterResponse',
  full_name='codereview.RetryRequestLaterResponse',
  filename='need_retry.proto',
  containing_type=None,
  fields=[
  ],
  extensions=[
  ],
  nested_types=[],  # TODO(robinson): Implement.
  enum_types=[
  ],
  options=None)

class RetryRequestLaterResponse(message.Message):
  __metaclass__ = reflection.GeneratedProtocolMessageType
  DESCRIPTOR = _RETRYREQUESTLATERRESPONSE


@@ -1,380 +0,0 @@
# Copyright 2007, 2008 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import base64
import cookielib
import getpass
import logging
import md5
import os
import random
import socket
import sys
import time
import urllib
import urllib2
import urlparse
from froofle.protobuf.service import RpcChannel
from froofle.protobuf.service import RpcController
from need_retry_pb2 import RetryRequestLaterResponse;
_cookie_jars = {}
def _open_jar(path):
  auth = False

  if path is None:
    c = cookielib.CookieJar()
  else:
    c = _cookie_jars.get(path)
    if c is None:
      c = cookielib.MozillaCookieJar(path)

      if os.path.exists(path):
        try:
          c.load()
          auth = True
        except (cookielib.LoadError, IOError):
          pass

      if auth:
        print >>sys.stderr, \
              'Loaded authentication cookies from %s' \
              % path
      else:
        os.close(os.open(path, os.O_CREAT, 0600))
        os.chmod(path, 0600)

      _cookie_jars[path] = c
    else:
      auth = True
  return c, auth


class ClientLoginError(urllib2.HTTPError):
  """Raised to indicate an error authenticating with ClientLogin."""

  def __init__(self, url, code, msg, headers, args):
    urllib2.HTTPError.__init__(self, url, code, msg, headers, None)
    self.args = args
    self.reason = args["Error"]


class Proxy(object):
  class _ResultHolder(object):
    def __call__(self, result):
      self._result = result

  class _RemoteController(RpcController):
    def Reset(self):
      pass

    def Failed(self):
      pass

    def ErrorText(self):
      pass

    def StartCancel(self):
      pass

    def SetFailed(self, reason):
      raise RuntimeError, reason

    def IsCancelled(self):
      pass

    def NotifyOnCancel(self, callback):
      pass

  def __init__(self, stub):
    self._stub = stub

  def __getattr__(self, key):
    method = getattr(self._stub, key)

    def call(request):
      done = self._ResultHolder()
      method(self._RemoteController(), request, done)
      return done._result

    return call


class HttpRpc(RpcChannel):
  """Simple protobuf over HTTP POST implementation."""

  def __init__(self, host, auth_function,
               host_override=None,
               extra_headers={},
               cookie_file=None):
    """Creates a new HttpRpc.

    Args:
      host: The host to send requests to.
      auth_function: A function that takes no arguments and returns an
        (email, password) tuple when called. Will be called if authentication
        is required.
      host_override: The host header to send to the server (defaults to host).
      extra_headers: A dict of extra headers to append to every request.
      cookie_file: If not None, name of the file in ~/ to save the
        cookie jar into.  Applications are encouraged to set this to
        '.$appname_cookies' or some otherwise unique name.
    """
    self.host = host.lower()
    self.host_override = host_override
    self.auth_function = auth_function
    self.authenticated = False
    self.extra_headers = extra_headers
    self.xsrf_token = None
    if cookie_file is None:
      self.cookie_file = None
    else:
      self.cookie_file = os.path.expanduser("~/%s" % cookie_file)
    self.opener = self._GetOpener()
    if self.host_override:
      logging.info("Server: %s; Host: %s", self.host, self.host_override)
    else:
      logging.info("Server: %s", self.host)

  def CallMethod(self, method, controller, request, response_type, done):
    pat = "application/x-google-protobuf; name=%s"

    url = "/proto/%s/%s" % (method.containing_service.name, method.name)
    reqbin = request.SerializeToString()
    reqtyp = pat % request.DESCRIPTOR.full_name
    reqmd5 = base64.b64encode(md5.new(reqbin).digest())
    start = time.time()
    while True:
      t, b = self._Send(url, reqbin, reqtyp, reqmd5)
      if t == (pat % RetryRequestLaterResponse.DESCRIPTOR.full_name):
        if time.time() >= (start + 1800):
          controller.SetFailed("timeout")
          return
        s = random.uniform(0.250, 2.000)
        print "Busy, retrying in %.3f seconds ..." % s
        time.sleep(s)
        continue

      if t == (pat % response_type.DESCRIPTOR.full_name):
        response = response_type()
        response.ParseFromString(b)
        done(response)
      else:
        controller.SetFailed("Unexpected %s response" % t)
      break

  def _CreateRequest(self, url, data=None):
    """Creates a new urllib request."""
    logging.debug("Creating request for: '%s' with payload:\n%s", url, data)
    req = urllib2.Request(url, data=data)
    if self.host_override:
      req.add_header("Host", self.host_override)
    for key, value in self.extra_headers.iteritems():
      req.add_header(key, value)
    return req

  def _GetAuthToken(self, email, password):
    """Uses ClientLogin to authenticate the user, returning an auth token.

    Args:
      email: The user's email address
      password: The user's password

    Raises:
      ClientLoginError: If there was an error authenticating with ClientLogin.
      HTTPError: If there was some other form of HTTP error.

    Returns:
      The authentication token returned by ClientLogin.
    """
    account_type = 'GOOGLE'
    if self.host.endswith('.google.com'):
      account_type = 'HOSTED'

    req = self._CreateRequest(
        url="https://www.google.com/accounts/ClientLogin",
data=urllib.urlencode({
"Email": email,
"Passwd": password,
"service": "ah",
"source": "gerrit-codereview-client",
"accountType": account_type,
})
)
try:
response = self.opener.open(req)
response_body = response.read()
response_dict = dict(x.split("=", 1)
for x in response_body.split("\n") if x)
return response_dict["Auth"]
except urllib2.HTTPError, e:
if e.code == 403:
body = e.read()
response_dict = dict(x.split("=", 1) for x in body.split("\n") if x)
raise ClientLoginError(req.get_full_url(), e.code, e.msg,
e.headers, response_dict)
else:
raise
def _GetAuthCookie(self, auth_token):
"""Fetches authentication cookies for an authentication token.
Args:
auth_token: The authentication token returned by ClientLogin.
Raises:
HTTPError: If there was an error fetching the authentication cookies.
"""
# This is a dummy value to allow us to identify when we're successful.
continue_location = "http://localhost/"
args = {"continue": continue_location, "auth": auth_token}
req = self._CreateRequest("http://%s/_ah/login?%s" %
(self.host, urllib.urlencode(args)))
try:
response = self.opener.open(req)
except urllib2.HTTPError, e:
response = e
if (response.code != 302 or
response.info()["location"] != continue_location):
raise urllib2.HTTPError(req.get_full_url(), response.code, response.msg,
response.headers, response.fp)
def _GetXsrfToken(self):
"""Fetches /proto/_token for use in X-XSRF-Token HTTP header.
Raises:
HTTPError: If there was an error fetching a new token.
"""
tries = 0
while True:
url = "http://%s/proto/_token" % self.host
req = self._CreateRequest(url)
try:
response = self.opener.open(req)
self.xsrf_token = response.read()
return
except urllib2.HTTPError, e:
if tries > 3:
raise
elif e.code == 401:
self._Authenticate()
else:
raise
def _Authenticate(self):
"""Authenticates the user.
The authentication process works as follows:
1) We get a username and password from the user
2) We use ClientLogin to obtain an AUTH token for the user
(see http://code.google.com/apis/accounts/AuthForInstalledApps.html).
3) We pass the auth token to /_ah/login on the server to obtain an
authentication cookie. If login was successful, it tries to redirect
us to the URL we provided.
If we attempt to access the upload API without first obtaining an
authentication cookie, it returns a 401 response and directs us to
authenticate ourselves with ClientLogin.
"""
attempts = 0
while True:
attempts += 1
try:
cred = self.auth_function()
auth_token = self._GetAuthToken(cred[0], cred[1])
except ClientLoginError:
if attempts < 3:
continue
raise
self._GetAuthCookie(auth_token)
self.authenticated = True
if self.cookie_file is not None:
print >>sys.stderr, \
'Saving authentication cookies to %s' \
% self.cookie_file
self.cookie_jar.save()
return
def _Send(self, request_path, payload, content_type, content_md5):
"""Sends an RPC and returns the response.
Args:
request_path: The path to send the request to, eg /api/appversion/create.
payload: The body of the request, or None to send an empty request.
content_type: The Content-Type header to use.
content_md5: The Content-MD5 header to use.
Returns:
The content type, as a string.
The response body, as a string.
"""
if not self.authenticated:
self._Authenticate()
if not self.xsrf_token:
self._GetXsrfToken()
old_timeout = socket.getdefaulttimeout()
socket.setdefaulttimeout(None)
try:
tries = 0
while True:
tries += 1
url = "http://%s%s" % (self.host, request_path)
req = self._CreateRequest(url=url, data=payload)
req.add_header("Content-Type", content_type)
req.add_header("Content-MD5", content_md5)
req.add_header("X-XSRF-Token", self.xsrf_token)
try:
f = self.opener.open(req)
hdr = f.info()
type = hdr.getheader('Content-Type',
'application/octet-stream')
response = f.read()
f.close()
return type, response
except urllib2.HTTPError, e:
if tries > 3:
raise
elif e.code == 401:
self._Authenticate()
elif e.code == 403:
if not hasattr(e, 'read'):
e.read = lambda: ''  # plain function; attributes set on instances are not bound as methods
raise RuntimeError, '403\nxsrf: %s\n%s' \
% (self.xsrf_token, e.read())
else:
raise
finally:
socket.setdefaulttimeout(old_timeout)
def _GetOpener(self):
"""Returns an OpenerDirector that supports cookies and ignores redirects.
Returns:
A urllib2.OpenerDirector object.
"""
opener = urllib2.OpenerDirector()
opener.add_handler(urllib2.ProxyHandler())
opener.add_handler(urllib2.UnknownHandler())
opener.add_handler(urllib2.HTTPHandler())
opener.add_handler(urllib2.HTTPDefaultErrorHandler())
opener.add_handler(urllib2.HTTPSHandler())
opener.add_handler(urllib2.HTTPErrorProcessor())
self.cookie_jar, \
self.authenticated = _open_jar(self.cookie_file)
opener.add_handler(urllib2.HTTPCookieProcessor(self.cookie_jar))
return opener
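
A hedged sketch of how these classes compose (the host, credentials
and cookie file name are hypothetical; ReviewService_Stub and
UploadBundleRequest are the generated types shown elsewhere in this
change):

  def _auth():
    # Hypothetical credential callback; HttpRpc calls it on demand.
    return ('user@example.com', 'secret')

  rpc = HttpRpc('codereview.example.com', _auth,
                cookie_file='.gerrit_cookies')
  svc = Proxy(ReviewService_Stub(rpc))
  # Proxy turns the asynchronous stub call into a blocking one and
  # returns the response message directly.  (Required fields of the
  # request must be populated before the call.)
  req = upload_bundle_pb2.UploadBundleRequest()
  rsp = svc.UploadBundle(req)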

View File

@ -1,48 +0,0 @@
#!/usr/bin/python2.4
# Generated by the protocol buffer compiler. DO NOT EDIT!
from froofle.protobuf import descriptor
from froofle.protobuf import message
from froofle.protobuf import reflection
from froofle.protobuf import service
from froofle.protobuf import service_reflection
from froofle.protobuf import descriptor_pb2
import upload_bundle_pb2
_REVIEWSERVICE = descriptor.ServiceDescriptor(
name='ReviewService',
full_name='codereview.ReviewService',
index=0,
options=None,
methods=[
descriptor.MethodDescriptor(
name='UploadBundle',
full_name='codereview.ReviewService.UploadBundle',
index=0,
containing_service=None,
input_type=upload_bundle_pb2._UPLOADBUNDLEREQUEST,
output_type=upload_bundle_pb2._UPLOADBUNDLERESPONSE,
options=None,
),
descriptor.MethodDescriptor(
name='ContinueBundle',
full_name='codereview.ReviewService.ContinueBundle',
index=1,
containing_service=None,
input_type=upload_bundle_pb2._UPLOADBUNDLECONTINUE,
output_type=upload_bundle_pb2._UPLOADBUNDLERESPONSE,
options=None,
),
])
class ReviewService(service.Service):
__metaclass__ = service_reflection.GeneratedServiceType
DESCRIPTOR = _REVIEWSERVICE
class ReviewService_Stub(ReviewService):
__metaclass__ = service_reflection.GeneratedServiceStubType
DESCRIPTOR = _REVIEWSERVICE

View File

@ -1,271 +0,0 @@
#!/usr/bin/python2.4
# Generated by the protocol buffer compiler. DO NOT EDIT!
from froofle.protobuf import descriptor
from froofle.protobuf import message
from froofle.protobuf import reflection
from froofle.protobuf import service
from froofle.protobuf import service_reflection
from froofle.protobuf import descriptor_pb2
_UPLOADBUNDLERESPONSE_CODETYPE = descriptor.EnumDescriptor(
name='CodeType',
full_name='codereview.UploadBundleResponse.CodeType',
filename='CodeType',
values=[
descriptor.EnumValueDescriptor(
name='RECEIVED', index=0, number=1,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='CONTINUE', index=1, number=4,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='UNAUTHORIZED_USER', index=2, number=7,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='UNKNOWN_CHANGE', index=3, number=9,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='CHANGE_CLOSED', index=4, number=10,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='UNKNOWN_EMAIL', index=5, number=11,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='UNKNOWN_PROJECT', index=6, number=2,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='UNKNOWN_BRANCH', index=7, number=3,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='UNKNOWN_BUNDLE', index=8, number=5,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='NOT_BUNDLE_OWNER', index=9, number=6,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='BUNDLE_CLOSED', index=10, number=8,
options=None,
type=None),
],
options=None,
)
_REPLACEPATCHSET = descriptor.Descriptor(
name='ReplacePatchSet',
full_name='codereview.ReplacePatchSet',
filename='upload_bundle.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='change_id', full_name='codereview.ReplacePatchSet.change_id', index=0,
number=1, type=9, cpp_type=9, label=2,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='object_id', full_name='codereview.ReplacePatchSet.object_id', index=1,
number=2, type=9, cpp_type=9, label=2,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
],
options=None)
_UPLOADBUNDLEREQUEST = descriptor.Descriptor(
name='UploadBundleRequest',
full_name='codereview.UploadBundleRequest',
filename='upload_bundle.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='dest_project', full_name='codereview.UploadBundleRequest.dest_project', index=0,
number=10, type=9, cpp_type=9, label=2,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='dest_branch', full_name='codereview.UploadBundleRequest.dest_branch', index=1,
number=11, type=9, cpp_type=9, label=2,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='partial_upload', full_name='codereview.UploadBundleRequest.partial_upload', index=2,
number=12, type=8, cpp_type=7, label=2,
default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='bundle_data', full_name='codereview.UploadBundleRequest.bundle_data', index=3,
number=13, type=12, cpp_type=9, label=2,
default_value="",
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='contained_object', full_name='codereview.UploadBundleRequest.contained_object', index=4,
number=1, type=9, cpp_type=9, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='replace', full_name='codereview.UploadBundleRequest.replace', index=5,
number=2, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='reviewers', full_name='codereview.UploadBundleRequest.reviewers', index=6,
number=3, type=9, cpp_type=9, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='cc', full_name='codereview.UploadBundleRequest.cc', index=7,
number=4, type=9, cpp_type=9, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
],
options=None)
_UPLOADBUNDLERESPONSE = descriptor.Descriptor(
name='UploadBundleResponse',
full_name='codereview.UploadBundleResponse',
filename='upload_bundle.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='status_code', full_name='codereview.UploadBundleResponse.status_code', index=0,
number=10, type=14, cpp_type=8, label=2,
default_value=1,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='bundle_id', full_name='codereview.UploadBundleResponse.bundle_id', index=1,
number=11, type=9, cpp_type=9, label=1,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='invalid_reviewers', full_name='codereview.UploadBundleResponse.invalid_reviewers', index=2,
number=12, type=9, cpp_type=9, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='invalid_cc', full_name='codereview.UploadBundleResponse.invalid_cc', index=3,
number=13, type=9, cpp_type=9, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
_UPLOADBUNDLERESPONSE_CODETYPE,
],
options=None)
_UPLOADBUNDLECONTINUE = descriptor.Descriptor(
name='UploadBundleContinue',
full_name='codereview.UploadBundleContinue',
filename='upload_bundle.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='bundle_id', full_name='codereview.UploadBundleContinue.bundle_id', index=0,
number=10, type=9, cpp_type=9, label=2,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='segment_id', full_name='codereview.UploadBundleContinue.segment_id', index=1,
number=11, type=5, cpp_type=1, label=2,
default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='partial_upload', full_name='codereview.UploadBundleContinue.partial_upload', index=2,
number=12, type=8, cpp_type=7, label=2,
default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='bundle_data', full_name='codereview.UploadBundleContinue.bundle_data', index=3,
number=13, type=12, cpp_type=9, label=1,
default_value="",
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
],
options=None)
_UPLOADBUNDLEREQUEST.fields_by_name['replace'].message_type = _REPLACEPATCHSET
_UPLOADBUNDLERESPONSE.fields_by_name['status_code'].enum_type = _UPLOADBUNDLERESPONSE_CODETYPE
class ReplacePatchSet(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _REPLACEPATCHSET
class UploadBundleRequest(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _UPLOADBUNDLEREQUEST
class UploadBundleResponse(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _UPLOADBUNDLERESPONSE
class UploadBundleContinue(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _UPLOADBUNDLECONTINUE

View File

@ -100,6 +100,9 @@ class Coloring(object):
else:
self._on = False
def redirect(self, out):
self._out = out
@property
def is_on(self):
return self._on
@ -107,6 +110,9 @@ class Coloring(object):
def write(self, fmt, *args):
self._out.write(fmt % args)
def flush(self):
self._out.flush()
def nl(self):
self._out.write('\n')
@ -137,7 +143,7 @@ class Coloring(object):
if v is None:
return _Color(fg, bg, attr)
v = v.trim().lowercase()
v = v.strip().lower()
if v == "reset":
return RESET
elif v == '':

View File

@ -15,9 +15,12 @@
import os
import optparse
import platform
import re
import sys
from error import NoSuchProjectError
from error import InvalidProjectGroupsError
class Command(object):
"""Base class for any command line action in repo.
@ -27,6 +30,9 @@ class Command(object):
manifest = None
_optparse = None
def WantPager(self, opt):
return False
@property
def OptionParser(self):
if self._optparse is None:
@ -53,16 +59,24 @@ class Command(object):
"""Perform the action, after option parsing is complete.
"""
raise NotImplementedError
def GetProjects(self, args, missing_ok=False):
"""A list of projects that match the arguments.
"""
all = self.manifest.projects
result = []
mp = self.manifest.manifestProject
groups = mp.config.GetString('manifest.groups')
if not groups:
groups = 'default,platform-' + platform.system().lower()
groups = [x for x in re.split('[,\s]+', groups) if x]
if not args:
for project in all.values():
if missing_ok or project.Exists:
if ((missing_ok or project.Exists) and
project.MatchesGroups(groups)):
result.append(project)
else:
by_path = None
@ -71,7 +85,7 @@ class Command(object):
project = all.get(arg)
if not project:
path = os.path.abspath(arg)
path = os.path.abspath(arg).replace('\\', '/')
if not by_path:
by_path = dict()
@ -79,13 +93,15 @@ class Command(object):
by_path[p.worktree] = p
if os.path.exists(path):
oldpath = None
while path \
and path != '/' \
and path != oldpath \
and path != self.manifest.topdir:
try:
project = by_path[path]
break
except KeyError:
oldpath = path
path = os.path.dirname(path)
else:
try:
@ -97,6 +113,8 @@ class Command(object):
raise NoSuchProjectError(arg)
if not missing_ok and not project.Exists:
raise NoSuchProjectError(arg)
if not project.MatchesGroups(groups):
raise InvalidProjectGroupsError(arg)
result.append(project)
@ -109,8 +127,17 @@ class InteractiveCommand(Command):
"""Command which requires user interaction on the tty and
must not run within a pager, even if the user asks to.
"""
def WantPager(self, opt):
return False
class PagedCommand(Command):
"""Command which defaults to output in a pager, as its
display tends to be larger than one screen full.
"""
def WantPager(self, opt):
return True
class MirrorSafeCommand(object):
"""Command permits itself to run within a mirror,
and does not require a working directory.
"""

View File

@ -19,39 +19,54 @@ XML File Format
A manifest XML file (e.g. 'default.xml') roughly conforms to the
following DTD:
<!DOCTYPE manifest [
<!ELEMENT manifest (remote*,
default?,
remove-project*,
project*,
add-remote*)>
<!DOCTYPE manifest [
<!ELEMENT manifest (notice?,
remote*,
default?,
manifest-server?,
remove-project*,
project*,
repo-hooks?)>
<!ELEMENT notice (#PCDATA)>
<!ELEMENT remote (EMPTY)>
<!ATTLIST remote name ID #REQUIRED>
<!ATTLIST remote fetch CDATA #REQUIRED>
<!ATTLIST remote review CDATA #IMPLIED>
<!ELEMENT default (EMPTY)>
<!ATTLIST default remote IDREF #IMPLIED>
<!ATTLIST default revision CDATA #IMPLIED>
<!ATTLIST default sync-j CDATA #IMPLIED>
<!ATTLIST default sync-c CDATA #IMPLIED>
<!ELEMENT remote (EMPTY)>
<!ATTLIST remote name ID #REQUIRED>
<!ATTLIST remote fetch CDATA #REQUIRED>
<!ATTLIST remote review CDATA #IMPLIED>
<!ATTLIST remote project-name CDATA #IMPLIED>
<!ELEMENT manifest-server (EMPTY)>
<!ATTLIST url CDATA #REQUIRED>
<!ELEMENT project (annotation?)>
<!ATTLIST project name CDATA #REQUIRED>
<!ATTLIST project path CDATA #IMPLIED>
<!ATTLIST project remote IDREF #IMPLIED>
<!ATTLIST project revision CDATA #IMPLIED>
<!ATTLIST project groups CDATA #IMPLIED>
<!ATTLIST project sync-c CDATA #IMPLIED>
<!ELEMENT default (EMPTY)>
<!ATTLIST default remote IDREF #IMPLIED>
<!ATTLIST default revision CDATA #IMPLIED>
<!ELEMENT annotation (EMPTY)>
<!ATTLIST annotation name CDATA #REQUIRED>
<!ATTLIST annotation value CDATA #REQUIRED>
<!ATTLIST annotation keep CDATA "true">
<!ELEMENT remove-project (EMPTY)>
<!ATTLIST remove-project name CDATA #REQUIRED>
<!ELEMENT project (remote*)>
<!ATTLIST project name CDATA #REQUIRED>
<!ATTLIST project path CDATA #IMPLIED>
<!ATTLIST project remote IDREF #IMPLIED>
<!ATTLIST project revision CDATA #IMPLIED>
<!ELEMENT repo-hooks (EMPTY)>
<!ATTLIST repo-hooks in-project CDATA #REQUIRED>
<!ATTLIST repo-hooks enabled-list CDATA #REQUIRED>
<!ELEMENT add-remote (EMPTY)>
<!ATTLIST add-remote to-project ID #REQUIRED>
<!ATTLIST add-remote name ID #REQUIRED>
<!ATTLIST add-remote fetch CDATA #REQUIRED>
<!ATTLIST add-remote review CDATA #IMPLIED>
<!ATTLIST add-remote project-name CDATA #IMPLIED>
<!ELEMENT remove-project (EMPTY)>
<!ATTLIST remove-project name CDATA #REQUIRED>
]>
<!ELEMENT include (EMPTY)>
<!ATTLIST include name CDATA #REQUIRED>
]>
A description of the elements and their attributes follows.
@ -82,25 +97,6 @@ Attribute `review`: Hostname of the Gerrit server where reviews
are uploaded to by `repo upload`. This attribute is optional;
if not specified then `repo upload` will not function.
Attribute `project-name`: Specifies the name of this project used
by the review server given in the review attribute of this element.
Only permitted when the remote element is nested inside of a project
element (see below). If not given, defaults to the name supplied
in the project's name attribute.
Element add-remote
------------------
Adds a remote to an existing project, whose name is given by the
to-project attribute. This is functionally equivalent to nesting
a remote element under the project, but has the advantage that it
can be specified in the user's `local_manifest.xml` to add a remote
to a project declared by the normal manifest.
The element can be used to add a fork of an existing project that
the user needs to work with.
Element default
---------------
@ -117,6 +113,27 @@ Attribute `revision`: Name of a Git branch (e.g. `master` or
revision attribute will use this revision.
Element manifest-server
-----------------------
At most one manifest-server may be specified. The url attribute
is used to specify the URL of a manifest server, which is an
XML RPC service that will return a manifest in which each project
is pegged to a known good revision for the current branch and
target.
The manifest server should implement:
GetApprovedManifest(branch, target)
The target to use is defined by environment variables TARGET_PRODUCT
and TARGET_BUILD_VARIANT. These variables are used to create a string
of the form $TARGET_PRODUCT-$TARGET_BUILD_VARIANT, e.g. passion-userdebug.
If one or both of those variables are not present, the program will call
GetApprovedManifest without the target parameter and the manifest server
should choose a reasonable default target.
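
A client-side sketch of this RPC using Python's standard xmlrpclib
module (the server URL and branch name here are hypothetical):

  import os
  import xmlrpclib

  server = xmlrpclib.ServerProxy('http://manifest-server.example.com/rpc')
  product = os.environ.get('TARGET_PRODUCT')
  variant = os.environ.get('TARGET_BUILD_VARIANT')
  if product and variant:
    # Both variables set: request the pinned manifest for a target
    # such as 'passion-userdebug'.
    manifest = server.GetApprovedManifest('master',
                                          '%s-%s' % (product, variant))
  else:
    # One or both missing: the server picks a reasonable default target.
    manifest = server.GetApprovedManifest('master')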
Element project
---------------
@ -152,12 +169,20 @@ Tags and/or explicit SHA-1s should work in theory, but have not
been extensively tested. If not supplied the revision given by
the default element is used.
Child element `remote`: Described like the top-level remote element,
but adds an additional remote to only this project. These additional
remotes are fetched first during the initial `repo sync`, so the
majority of the project's object database is obtained through them.
Attribute `groups`: List of groups to which this project belongs,
whitespace or comma separated. All projects belong to the group
"default".
Element annotation
------------------
Zero or more annotation elements may be specified as children of a
project element. Each element describes a name-value pair that will be
exported into each project's environment during a 'forall' command,
prefixed with REPO__. In addition, there is an optional attribute
"keep" which accepts the case insensitive values "true" (default) or
"false". This attribute determines whether or not the annotation will
be kept when exported with the manifest subcommand.
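
For example (the annotation name and value are illustrative):

  <project path="external/bar" name="platform/external/bar">
    <annotation name="BRANCH" value="stable" keep="false" />
  </project>

During a 'forall' command this is exported into the project's
environment as REPO__BRANCH=stable; because keep is "false", the
annotation is omitted when the manifest subcommand exports the
manifest.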
Element remove-project
----------------------
@ -170,6 +195,16 @@ This element is mostly useful in the local_manifest.xml, where
the user can remove a project, and possibly replace it with their
own definition.
Element include
---------------
This element provides the capability of including another manifest
file into the originating manifest. Normal rules apply to the included
manifest: it must be a usable manifest on its own.
Attribute `name`: the manifest to include, specified relative to
the manifest repository's root.
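
For example (the file name is illustrative):

  <?xml version="1.0" encoding="UTF-8"?>
  <manifest>
    <default revision="master" remote="origin" />
    <include name="common.xml" />
  </manifest>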
Local Manifest
==============
@ -179,22 +214,15 @@ manifest, stored in `$TOP_DIR/.repo/local_manifest.xml`.
For example:
----
$ cat .repo/local_manifest.xml
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
<project path="manifest"
name="tools/manifest" />
<project path="platform-manifest"
name="platform/manifest" />
</manifest>
----
$ cat .repo/local_manifest.xml
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
<project path="manifest"
name="tools/manifest" />
<project path="platform-manifest"
name="platform/manifest" />
</manifest>
Users may add projects to the local manifest prior to a `repo sync`
invocation, instructing repo to automatically download and manage
these extra projects.
Currently the only supported feature of a local manifest is to
add new remotes and/or projects. In the future a local manifest
may support picking different revisions of a project, or deleting
projects specified in the default manifest.

View File

@ -14,6 +14,7 @@
# limitations under the License.
import os
import re
import sys
import subprocess
import tempfile
@ -38,9 +39,10 @@ class Editor(object):
if e:
return e
e = cls.globalConfig.GetString('core.editor')
if e:
return e
if cls.globalConfig:
e = cls.globalConfig.GetString('core.editor')
if e:
return e
e = os.getenv('VISUAL')
if e:
@ -69,16 +71,38 @@ least one of these before using this command."""
Returns:
new value of edited text; None if editing did not succeed
"""
editor = cls._GetEditor().split()
editor = cls._GetEditor()
if editor == ':':
return data
fd, path = tempfile.mkstemp()
try:
os.write(fd, data)
os.close(fd)
fd = None
if subprocess.Popen(editor + [path]).wait() != 0:
raise EditorError()
return open(path).read()
if re.compile("^.*[$ \t'].*$").match(editor):
args = [editor + ' "$@"', 'sh']
shell = True
else:
args = [editor]
shell = False
args.append(path)
try:
rc = subprocess.Popen(args, shell=shell).wait()
except OSError, e:
raise EditorError('editor failed, %s: %s %s'
% (str(e), editor, path))
if rc != 0:
raise EditorError('editor failed with exit status %d: %s %s'
% (rc, editor, path))
fd2 = open(path)
try:
return fd2.read()
finally:
fd2.close()
finally:
if fd:
os.close(fd)
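
The editor-selection logic above can be restated in isolation; this is
only a sketch of the mechanism, not part of the change. On POSIX
systems, Popen with shell=True runs the first list element under
/bin/sh -c and passes the remaining elements to the shell as positional
parameters, so "$@" expands to the file being edited:

  import re
  import subprocess

  def _run_editor(editor, path):
    # Editors containing '$', space, tab or a quote (e.g. "vim -f")
    # cannot be exec'd directly; hand them to sh for word splitting
    # and variable expansion, appending the path via "$@".
    if re.compile("^.*[$ \t'].*$").match(editor):
      return subprocess.Popen([editor + ' "$@"', 'sh', path],
                              shell=True).wait()
    return subprocess.Popen([editor, path]).wait()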

View File

@ -17,9 +17,18 @@ class ManifestParseError(Exception):
"""Failed to parse the manifest file.
"""
class ManifestInvalidRevisionError(Exception):
"""The revision value in a project is incorrect.
"""
class EditorError(Exception):
"""Unspecified error from the user's text editor.
"""
def __init__(self, reason):
self.reason = reason
def __str__(self):
return self.reason
class GitError(Exception):
"""Unspecified internal error from git.
@ -48,6 +57,15 @@ class UploadError(Exception):
def __str__(self):
return self.reason
class DownloadError(Exception):
"""Cannot download a repository.
"""
def __init__(self, reason):
self.reason = reason
def __str__(self):
return self.reason
class NoSuchProjectError(Exception):
"""A specified project does not exist in the work tree.
"""
@ -59,6 +77,18 @@ class NoSuchProjectError(Exception):
return 'in current directory'
return self.name
class InvalidProjectGroupsError(Exception):
"""A specified project is not suitable for the specified groups
"""
def __init__(self, name=None):
self.name = name
def __str__(self):
if self.name is None:
return 'in current directory'
return self.name
class RepoChangedException(Exception):
"""Thrown if 'repo sync' results in repo updating its internal
repo or manifest repositories. In this special case we must
@ -66,3 +96,10 @@ class RepoChangedException(Exception):
"""
def __init__(self, extra_args=[]):
self.extra_args = extra_args
class HookError(Exception):
"""Thrown if a 'repo-hook' could not be run.
The common case is that the file wasn't present when we tried to run it.
"""
pass

View File

View File

@ -1,433 +0,0 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# http://code.google.com/p/protobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# TODO(robinson): We probably need to provide deep-copy methods for
# descriptor types. When a FieldDescriptor is passed into
# Descriptor.__init__(), we should make a deep copy and then set
# containing_type on it. Alternatively, we could just get
# rid of containing_type (it's not needed for reflection.py, at least).
#
# TODO(robinson): Print method?
#
# TODO(robinson): Useful __repr__?
"""Descriptors essentially contain exactly the information found in a .proto
file, in types that make this information accessible in Python.
"""
__author__ = 'robinson@google.com (Will Robinson)'
class DescriptorBase(object):
"""Descriptors base class.
This class is the base of all descriptor classes. It provides common options
related functionality.
"""
def __init__(self, options, options_class_name):
"""Initialize the descriptor given its options message and the name of the
class of the options message. The name of the class is required in case
the options message is None and has to be created.
"""
self._options = options
self._options_class_name = options_class_name
def GetOptions(self):
"""Retrieves descriptor options.
This method returns the options set or creates the default options for the
descriptor.
"""
if self._options:
return self._options
from froofle.protobuf import descriptor_pb2
try:
options_class = getattr(descriptor_pb2, self._options_class_name)
except AttributeError:
raise RuntimeError('Unknown options class name %s!' %
(self._options_class_name))
self._options = options_class()
return self._options
class Descriptor(DescriptorBase):
"""Descriptor for a protocol message type.
A Descriptor instance has the following attributes:
name: (str) Name of this protocol message type.
full_name: (str) Fully-qualified name of this protocol message type,
which will include protocol "package" name and the name of any
enclosing types.
filename: (str) Name of the .proto file containing this message.
containing_type: (Descriptor) Reference to the descriptor of the
type containing us, or None if we have no containing type.
fields: (list of FieldDescriptors) Field descriptors for all
fields in this type.
fields_by_number: (dict int -> FieldDescriptor) Same FieldDescriptor
objects as in |fields|, but indexed by "number" attribute in each
FieldDescriptor.
fields_by_name: (dict str -> FieldDescriptor) Same FieldDescriptor
objects as in |fields|, but indexed by "name" attribute in each
FieldDescriptor.
nested_types: (list of Descriptors) Descriptor references
for all protocol message types nested within this one.
nested_types_by_name: (dict str -> Descriptor) Same Descriptor
objects as in |nested_types|, but indexed by "name" attribute
in each Descriptor.
enum_types: (list of EnumDescriptors) EnumDescriptor references
for all enums contained within this type.
enum_types_by_name: (dict str -> EnumDescriptor) Same EnumDescriptor
objects as in |enum_types|, but indexed by "name" attribute
in each EnumDescriptor.
enum_values_by_name: (dict str -> EnumValueDescriptor) Dict mapping
from enum value name to EnumValueDescriptor for that value.
extensions: (list of FieldDescriptor) All extensions defined directly
within this message type (NOT within a nested type).
extensions_by_name: (dict, string -> FieldDescriptor) Same FieldDescriptor
objects as |extensions|, but indexed by "name" attribute of each
FieldDescriptor.
options: (descriptor_pb2.MessageOptions) Protocol message options or None
to use default message options.
"""
def __init__(self, name, full_name, filename, containing_type,
fields, nested_types, enum_types, extensions, options=None):
"""Arguments to __init__() are as described in the description
of Descriptor fields above.
"""
super(Descriptor, self).__init__(options, 'MessageOptions')
self.name = name
self.full_name = full_name
self.filename = filename
self.containing_type = containing_type
# We have fields in addition to fields_by_name and fields_by_number,
# so that:
# 1. Clients can index fields by "order in which they're listed."
# 2. Clients can easily iterate over all fields with the terse
# syntax: for f in descriptor.fields: ...
self.fields = fields
for field in self.fields:
field.containing_type = self
self.fields_by_number = dict((f.number, f) for f in fields)
self.fields_by_name = dict((f.name, f) for f in fields)
self.nested_types = nested_types
self.nested_types_by_name = dict((t.name, t) for t in nested_types)
self.enum_types = enum_types
for enum_type in self.enum_types:
enum_type.containing_type = self
self.enum_types_by_name = dict((t.name, t) for t in enum_types)
self.enum_values_by_name = dict(
(v.name, v) for t in enum_types for v in t.values)
self.extensions = extensions
for extension in self.extensions:
extension.extension_scope = self
self.extensions_by_name = dict((f.name, f) for f in extensions)
# TODO(robinson): We should have aggressive checking here,
# for example:
# * If you specify a repeated field, you should not be allowed
# to specify a default value.
# * [Other examples here as needed].
#
# TODO(robinson): for this and other *Descriptor classes, we
# might also want to lock things down aggressively (e.g.,
# prevent clients from setting the attributes). Having
# stronger invariants here in general will reduce the number
# of runtime checks we must do in reflection.py...
class FieldDescriptor(DescriptorBase):
"""Descriptor for a single field in a .proto file.
A FieldDescriptor instance has the following attributes:
name: (str) Name of this field, exactly as it appears in .proto.
full_name: (str) Name of this field, including containing scope. This is
particularly relevant for extensions.
index: (int) Dense, 0-indexed index giving the order that this
field textually appears within its message in the .proto file.
number: (int) Tag number declared for this field in the .proto file.
type: (One of the TYPE_* constants below) Declared type.
cpp_type: (One of the CPPTYPE_* constants below) C++ type used to
represent this field.
label: (One of the LABEL_* constants below) Tells whether this
field is optional, required, or repeated.
default_value: (Varies) Default value of this field. Only
meaningful for non-repeated scalar fields. Repeated fields
should always set this to [], and non-repeated composite
fields should always set this to None.
containing_type: (Descriptor) Descriptor of the protocol message
type that contains this field. Set by the Descriptor constructor
if we're passed into one.
Somewhat confusingly, for extension fields, this is the
descriptor of the EXTENDED message, not the descriptor
of the message containing this field. (See is_extension and
extension_scope below).
message_type: (Descriptor) If a composite field, a descriptor
of the message type contained in this field. Otherwise, this is None.
enum_type: (EnumDescriptor) If this field contains an enum, a
descriptor of that enum. Otherwise, this is None.
is_extension: True iff this describes an extension field.
extension_scope: (Descriptor) Only meaningful if is_extension is True.
Gives the message that immediately contains this extension field.
Will be None iff we're a top-level (file-level) extension field.
options: (descriptor_pb2.FieldOptions) Protocol message field options or
None to use default field options.
"""
# Must be consistent with C++ FieldDescriptor::Type enum in
# descriptor.h.
#
# TODO(robinson): Find a way to eliminate this repetition.
TYPE_DOUBLE = 1
TYPE_FLOAT = 2
TYPE_INT64 = 3
TYPE_UINT64 = 4
TYPE_INT32 = 5
TYPE_FIXED64 = 6
TYPE_FIXED32 = 7
TYPE_BOOL = 8
TYPE_STRING = 9
TYPE_GROUP = 10
TYPE_MESSAGE = 11
TYPE_BYTES = 12
TYPE_UINT32 = 13
TYPE_ENUM = 14
TYPE_SFIXED32 = 15
TYPE_SFIXED64 = 16
TYPE_SINT32 = 17
TYPE_SINT64 = 18
MAX_TYPE = 18
# Must be consistent with C++ FieldDescriptor::CppType enum in
# descriptor.h.
#
# TODO(robinson): Find a way to eliminate this repetition.
CPPTYPE_INT32 = 1
CPPTYPE_INT64 = 2
CPPTYPE_UINT32 = 3
CPPTYPE_UINT64 = 4
CPPTYPE_DOUBLE = 5
CPPTYPE_FLOAT = 6
CPPTYPE_BOOL = 7
CPPTYPE_ENUM = 8
CPPTYPE_STRING = 9
CPPTYPE_MESSAGE = 10
MAX_CPPTYPE = 10
# Must be consistent with C++ FieldDescriptor::Label enum in
# descriptor.h.
#
# TODO(robinson): Find a way to eliminate this repetition.
LABEL_OPTIONAL = 1
LABEL_REQUIRED = 2
LABEL_REPEATED = 3
MAX_LABEL = 3
def __init__(self, name, full_name, index, number, type, cpp_type, label,
default_value, message_type, enum_type, containing_type,
is_extension, extension_scope, options=None):
"""The arguments are as described in the description of FieldDescriptor
attributes above.
Note that containing_type may be None, and may be set later if necessary
(to deal with circular references between message types, for example).
Likewise for extension_scope.
"""
super(FieldDescriptor, self).__init__(options, 'FieldOptions')
self.name = name
self.full_name = full_name
self.index = index
self.number = number
self.type = type
self.cpp_type = cpp_type
self.label = label
self.default_value = default_value
self.containing_type = containing_type
self.message_type = message_type
self.enum_type = enum_type
self.is_extension = is_extension
self.extension_scope = extension_scope
class EnumDescriptor(DescriptorBase):
"""Descriptor for an enum defined in a .proto file.
An EnumDescriptor instance has the following attributes:
name: (str) Name of the enum type.
full_name: (str) Full name of the type, including package name
and any enclosing type(s).
filename: (str) Name of the .proto file in which this appears.
values: (list of EnumValueDescriptors) List of the values
in this enum.
values_by_name: (dict str -> EnumValueDescriptor) Same as |values|,
but indexed by the "name" field of each EnumValueDescriptor.
values_by_number: (dict int -> EnumValueDescriptor) Same as |values|,
but indexed by the "number" field of each EnumValueDescriptor.
containing_type: (Descriptor) Descriptor of the immediate containing
type of this enum, or None if this is an enum defined at the
top level in a .proto file. Set by Descriptor's constructor
if we're passed into one.
options: (descriptor_pb2.EnumOptions) Enum options message or
None to use default enum options.
"""
def __init__(self, name, full_name, filename, values,
containing_type=None, options=None):
"""Arguments are as described in the attribute description above."""
super(EnumDescriptor, self).__init__(options, 'EnumOptions')
self.name = name
self.full_name = full_name
self.filename = filename
self.values = values
for value in self.values:
value.type = self
self.values_by_name = dict((v.name, v) for v in values)
self.values_by_number = dict((v.number, v) for v in values)
self.containing_type = containing_type
class EnumValueDescriptor(DescriptorBase):
"""Descriptor for a single value within an enum.
name: (str) Name of this value.
index: (int) Dense, 0-indexed index giving the order that this
value appears textually within its enum in the .proto file.
number: (int) Actual number assigned to this enum value.
type: (EnumDescriptor) EnumDescriptor to which this value
belongs. Set by EnumDescriptor's constructor if we're
passed into one.
options: (descriptor_pb2.EnumValueOptions) Enum value options message or
None to use default enum value options.
"""
def __init__(self, name, index, number, type=None, options=None):
"""Arguments are as described in the attribute description above."""
super(EnumValueDescriptor, self).__init__(options, 'EnumValueOptions')
self.name = name
self.index = index
self.number = number
self.type = type
class ServiceDescriptor(DescriptorBase):
"""Descriptor for a service.
name: (str) Name of the service.
full_name: (str) Full name of the service, including package name.
index: (int) 0-indexed index giving the order that this service's
definition appears within the .proto file.
methods: (list of MethodDescriptor) List of methods provided by this
service.
options: (descriptor_pb2.ServiceOptions) Service options message or
None to use default service options.
"""
def __init__(self, name, full_name, index, methods, options=None):
super(ServiceDescriptor, self).__init__(options, 'ServiceOptions')
self.name = name
self.full_name = full_name
self.index = index
self.methods = methods
# Set the containing service for each method in this service.
for method in self.methods:
method.containing_service = self
def FindMethodByName(self, name):
"""Searches for the specified method, and returns its descriptor."""
for method in self.methods:
if name == method.name:
return method
return None
class MethodDescriptor(DescriptorBase):
"""Descriptor for a method in a service.
name: (str) Name of the method within the service.
full_name: (str) Full name of method.
index: (int) 0-indexed index of the method inside the service.
containing_service: (ServiceDescriptor) The service that contains this
method.
input_type: The descriptor of the message that this method accepts.
output_type: The descriptor of the message that this method returns.
options: (descriptor_pb2.MethodOptions) Method options message or
None to use default method options.
"""
def __init__(self, name, full_name, index, containing_service,
input_type, output_type, options=None):
"""The arguments are as described in the description of MethodDescriptor
attributes above.
Note that containing_service may be None, and may be set later if necessary.
"""
super(MethodDescriptor, self).__init__(options, 'MethodOptions')
self.name = name
self.full_name = full_name
self.index = index
self.containing_service = containing_service
self.input_type = input_type
self.output_type = output_type
def _ParseOptions(message, string):
"""Parses serialized options.
This helper function is used to parse serialized options in generated
proto2 files. It must not be used outside proto2.
"""
message.ParseFromString(string)
return message

View File

@ -1,950 +0,0 @@
#!/usr/bin/python2.4
# Generated by the protocol buffer compiler. DO NOT EDIT!
from froofle.protobuf import descriptor
from froofle.protobuf import message
from froofle.protobuf import reflection
from froofle.protobuf import service
from froofle.protobuf import service_reflection
_FIELDDESCRIPTORPROTO_TYPE = descriptor.EnumDescriptor(
name='Type',
full_name='froofle.protobuf.FieldDescriptorProto.Type',
filename='Type',
values=[
descriptor.EnumValueDescriptor(
name='TYPE_DOUBLE', index=0, number=1,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='TYPE_FLOAT', index=1, number=2,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='TYPE_INT64', index=2, number=3,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='TYPE_UINT64', index=3, number=4,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='TYPE_INT32', index=4, number=5,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='TYPE_FIXED64', index=5, number=6,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='TYPE_FIXED32', index=6, number=7,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='TYPE_BOOL', index=7, number=8,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='TYPE_STRING', index=8, number=9,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='TYPE_GROUP', index=9, number=10,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='TYPE_MESSAGE', index=10, number=11,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='TYPE_BYTES', index=11, number=12,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='TYPE_UINT32', index=12, number=13,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='TYPE_ENUM', index=13, number=14,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='TYPE_SFIXED32', index=14, number=15,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='TYPE_SFIXED64', index=15, number=16,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='TYPE_SINT32', index=16, number=17,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='TYPE_SINT64', index=17, number=18,
options=None,
type=None),
],
options=None,
)
_FIELDDESCRIPTORPROTO_LABEL = descriptor.EnumDescriptor(
name='Label',
full_name='froofle.protobuf.FieldDescriptorProto.Label',
filename='Label',
values=[
descriptor.EnumValueDescriptor(
name='LABEL_OPTIONAL', index=0, number=1,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='LABEL_REQUIRED', index=1, number=2,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='LABEL_REPEATED', index=2, number=3,
options=None,
type=None),
],
options=None,
)
_FILEOPTIONS_OPTIMIZEMODE = descriptor.EnumDescriptor(
name='OptimizeMode',
full_name='froofle.protobuf.FileOptions.OptimizeMode',
filename='OptimizeMode',
values=[
descriptor.EnumValueDescriptor(
name='SPEED', index=0, number=1,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='CODE_SIZE', index=1, number=2,
options=None,
type=None),
],
options=None,
)
_FIELDOPTIONS_CTYPE = descriptor.EnumDescriptor(
name='CType',
full_name='froofle.protobuf.FieldOptions.CType',
filename='CType',
values=[
descriptor.EnumValueDescriptor(
name='CORD', index=0, number=1,
options=None,
type=None),
descriptor.EnumValueDescriptor(
name='STRING_PIECE', index=1, number=2,
options=None,
type=None),
],
options=None,
)
_FILEDESCRIPTORSET = descriptor.Descriptor(
name='FileDescriptorSet',
full_name='froofle.protobuf.FileDescriptorSet',
filename='froofle/protobuf/descriptor.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='file', full_name='froofle.protobuf.FileDescriptorSet.file', index=0,
number=1, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
],
options=None)
_FILEDESCRIPTORPROTO = descriptor.Descriptor(
name='FileDescriptorProto',
full_name='froofle.protobuf.FileDescriptorProto',
filename='froofle/protobuf/descriptor.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='name', full_name='froofle.protobuf.FileDescriptorProto.name', index=0,
number=1, type=9, cpp_type=9, label=1,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='package', full_name='froofle.protobuf.FileDescriptorProto.package', index=1,
number=2, type=9, cpp_type=9, label=1,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='dependency', full_name='froofle.protobuf.FileDescriptorProto.dependency', index=2,
number=3, type=9, cpp_type=9, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='message_type', full_name='froofle.protobuf.FileDescriptorProto.message_type', index=3,
number=4, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='enum_type', full_name='froofle.protobuf.FileDescriptorProto.enum_type', index=4,
number=5, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='service', full_name='froofle.protobuf.FileDescriptorProto.service', index=5,
number=6, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='extension', full_name='froofle.protobuf.FileDescriptorProto.extension', index=6,
number=7, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='options', full_name='froofle.protobuf.FileDescriptorProto.options', index=7,
number=8, type=11, cpp_type=10, label=1,
default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
],
options=None)
_DESCRIPTORPROTO_EXTENSIONRANGE = descriptor.Descriptor(
name='ExtensionRange',
full_name='froofle.protobuf.DescriptorProto.ExtensionRange',
filename='froofle/protobuf/descriptor.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='start', full_name='froofle.protobuf.DescriptorProto.ExtensionRange.start', index=0,
number=1, type=5, cpp_type=1, label=1,
default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='end', full_name='froofle.protobuf.DescriptorProto.ExtensionRange.end', index=1,
number=2, type=5, cpp_type=1, label=1,
default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
],
options=None)
_DESCRIPTORPROTO = descriptor.Descriptor(
name='DescriptorProto',
full_name='froofle.protobuf.DescriptorProto',
filename='froofle/protobuf/descriptor.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='name', full_name='froofle.protobuf.DescriptorProto.name', index=0,
number=1, type=9, cpp_type=9, label=1,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='field', full_name='froofle.protobuf.DescriptorProto.field', index=1,
number=2, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='extension', full_name='froofle.protobuf.DescriptorProto.extension', index=2,
number=6, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='nested_type', full_name='froofle.protobuf.DescriptorProto.nested_type', index=3,
number=3, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='enum_type', full_name='froofle.protobuf.DescriptorProto.enum_type', index=4,
number=4, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='extension_range', full_name='froofle.protobuf.DescriptorProto.extension_range', index=5,
number=5, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='options', full_name='froofle.protobuf.DescriptorProto.options', index=6,
number=7, type=11, cpp_type=10, label=1,
default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
],
options=None)
_FIELDDESCRIPTORPROTO = descriptor.Descriptor(
name='FieldDescriptorProto',
full_name='froofle.protobuf.FieldDescriptorProto',
filename='froofle/protobuf/descriptor.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='name', full_name='froofle.protobuf.FieldDescriptorProto.name', index=0,
number=1, type=9, cpp_type=9, label=1,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='number', full_name='froofle.protobuf.FieldDescriptorProto.number', index=1,
number=3, type=5, cpp_type=1, label=1,
default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='label', full_name='froofle.protobuf.FieldDescriptorProto.label', index=2,
number=4, type=14, cpp_type=8, label=1,
default_value=1,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='type', full_name='froofle.protobuf.FieldDescriptorProto.type', index=3,
number=5, type=14, cpp_type=8, label=1,
default_value=1,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='type_name', full_name='froofle.protobuf.FieldDescriptorProto.type_name', index=4,
number=6, type=9, cpp_type=9, label=1,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='extendee', full_name='froofle.protobuf.FieldDescriptorProto.extendee', index=5,
number=2, type=9, cpp_type=9, label=1,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='default_value', full_name='froofle.protobuf.FieldDescriptorProto.default_value', index=6,
number=7, type=9, cpp_type=9, label=1,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='options', full_name='froofle.protobuf.FieldDescriptorProto.options', index=7,
number=8, type=11, cpp_type=10, label=1,
default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
_FIELDDESCRIPTORPROTO_TYPE,
_FIELDDESCRIPTORPROTO_LABEL,
],
options=None)
_ENUMDESCRIPTORPROTO = descriptor.Descriptor(
name='EnumDescriptorProto',
full_name='froofle.protobuf.EnumDescriptorProto',
filename='froofle/protobuf/descriptor.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='name', full_name='froofle.protobuf.EnumDescriptorProto.name', index=0,
number=1, type=9, cpp_type=9, label=1,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='value', full_name='froofle.protobuf.EnumDescriptorProto.value', index=1,
number=2, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='options', full_name='froofle.protobuf.EnumDescriptorProto.options', index=2,
number=3, type=11, cpp_type=10, label=1,
default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
],
options=None)
_ENUMVALUEDESCRIPTORPROTO = descriptor.Descriptor(
name='EnumValueDescriptorProto',
full_name='froofle.protobuf.EnumValueDescriptorProto',
filename='froofle/protobuf/descriptor.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='name', full_name='froofle.protobuf.EnumValueDescriptorProto.name', index=0,
number=1, type=9, cpp_type=9, label=1,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='number', full_name='froofle.protobuf.EnumValueDescriptorProto.number', index=1,
number=2, type=5, cpp_type=1, label=1,
default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='options', full_name='froofle.protobuf.EnumValueDescriptorProto.options', index=2,
number=3, type=11, cpp_type=10, label=1,
default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
],
options=None)
_SERVICEDESCRIPTORPROTO = descriptor.Descriptor(
name='ServiceDescriptorProto',
full_name='froofle.protobuf.ServiceDescriptorProto',
filename='froofle/protobuf/descriptor.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='name', full_name='froofle.protobuf.ServiceDescriptorProto.name', index=0,
number=1, type=9, cpp_type=9, label=1,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='method', full_name='froofle.protobuf.ServiceDescriptorProto.method', index=1,
number=2, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='options', full_name='froofle.protobuf.ServiceDescriptorProto.options', index=2,
number=3, type=11, cpp_type=10, label=1,
default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
],
options=None)
_METHODDESCRIPTORPROTO = descriptor.Descriptor(
name='MethodDescriptorProto',
full_name='froofle.protobuf.MethodDescriptorProto',
filename='froofle/protobuf/descriptor.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='name', full_name='froofle.protobuf.MethodDescriptorProto.name', index=0,
number=1, type=9, cpp_type=9, label=1,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='input_type', full_name='froofle.protobuf.MethodDescriptorProto.input_type', index=1,
number=2, type=9, cpp_type=9, label=1,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='output_type', full_name='froofle.protobuf.MethodDescriptorProto.output_type', index=2,
number=3, type=9, cpp_type=9, label=1,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='options', full_name='froofle.protobuf.MethodDescriptorProto.options', index=3,
number=4, type=11, cpp_type=10, label=1,
default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
],
options=None)
_FILEOPTIONS = descriptor.Descriptor(
name='FileOptions',
full_name='froofle.protobuf.FileOptions',
filename='froofle/protobuf/descriptor.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='java_package', full_name='froofle.protobuf.FileOptions.java_package', index=0,
number=1, type=9, cpp_type=9, label=1,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='java_outer_classname', full_name='froofle.protobuf.FileOptions.java_outer_classname', index=1,
number=8, type=9, cpp_type=9, label=1,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='java_multiple_files', full_name='froofle.protobuf.FileOptions.java_multiple_files', index=2,
number=10, type=8, cpp_type=7, label=1,
default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='optimize_for', full_name='froofle.protobuf.FileOptions.optimize_for', index=3,
number=9, type=14, cpp_type=8, label=1,
default_value=2,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='uninterpreted_option', full_name='froofle.protobuf.FileOptions.uninterpreted_option', index=4,
number=999, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
_FILEOPTIONS_OPTIMIZEMODE,
],
options=None)
_MESSAGEOPTIONS = descriptor.Descriptor(
name='MessageOptions',
full_name='froofle.protobuf.MessageOptions',
filename='froofle/protobuf/descriptor.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='message_set_wire_format', full_name='froofle.protobuf.MessageOptions.message_set_wire_format', index=0,
number=1, type=8, cpp_type=7, label=1,
default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='uninterpreted_option', full_name='froofle.protobuf.MessageOptions.uninterpreted_option', index=1,
number=999, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
],
options=None)
_FIELDOPTIONS = descriptor.Descriptor(
name='FieldOptions',
full_name='froofle.protobuf.FieldOptions',
filename='froofle/protobuf/descriptor.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='ctype', full_name='froofle.protobuf.FieldOptions.ctype', index=0,
number=1, type=14, cpp_type=8, label=1,
default_value=1,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='experimental_map_key', full_name='froofle.protobuf.FieldOptions.experimental_map_key', index=1,
number=9, type=9, cpp_type=9, label=1,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='uninterpreted_option', full_name='froofle.protobuf.FieldOptions.uninterpreted_option', index=2,
number=999, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
_FIELDOPTIONS_CTYPE,
],
options=None)
_ENUMOPTIONS = descriptor.Descriptor(
name='EnumOptions',
full_name='froofle.protobuf.EnumOptions',
filename='froofle/protobuf/descriptor.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='uninterpreted_option', full_name='froofle.protobuf.EnumOptions.uninterpreted_option', index=0,
number=999, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
],
options=None)
_ENUMVALUEOPTIONS = descriptor.Descriptor(
name='EnumValueOptions',
full_name='froofle.protobuf.EnumValueOptions',
filename='froofle/protobuf/descriptor.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='uninterpreted_option', full_name='froofle.protobuf.EnumValueOptions.uninterpreted_option', index=0,
number=999, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
],
options=None)
_SERVICEOPTIONS = descriptor.Descriptor(
name='ServiceOptions',
full_name='froofle.protobuf.ServiceOptions',
filename='froofle/protobuf/descriptor.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='uninterpreted_option', full_name='froofle.protobuf.ServiceOptions.uninterpreted_option', index=0,
number=999, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
],
options=None)
_METHODOPTIONS = descriptor.Descriptor(
name='MethodOptions',
full_name='froofle.protobuf.MethodOptions',
filename='froofle/protobuf/descriptor.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='uninterpreted_option', full_name='froofle.protobuf.MethodOptions.uninterpreted_option', index=0,
number=999, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
],
options=None)
_UNINTERPRETEDOPTION_NAMEPART = descriptor.Descriptor(
name='NamePart',
full_name='froofle.protobuf.UninterpretedOption.NamePart',
filename='froofle/protobuf/descriptor.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='name_part', full_name='froofle.protobuf.UninterpretedOption.NamePart.name_part', index=0,
number=1, type=9, cpp_type=9, label=2,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='is_extension', full_name='froofle.protobuf.UninterpretedOption.NamePart.is_extension', index=1,
number=2, type=8, cpp_type=7, label=2,
default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
],
options=None)
_UNINTERPRETEDOPTION = descriptor.Descriptor(
name='UninterpretedOption',
full_name='froofle.protobuf.UninterpretedOption',
filename='froofle/protobuf/descriptor.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='name', full_name='froofle.protobuf.UninterpretedOption.name', index=0,
number=2, type=11, cpp_type=10, label=3,
default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='identifier_value', full_name='froofle.protobuf.UninterpretedOption.identifier_value', index=1,
number=3, type=9, cpp_type=9, label=1,
default_value=unicode("", "utf-8"),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='positive_int_value', full_name='froofle.protobuf.UninterpretedOption.positive_int_value', index=2,
number=4, type=4, cpp_type=4, label=1,
default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='negative_int_value', full_name='froofle.protobuf.UninterpretedOption.negative_int_value', index=3,
number=5, type=3, cpp_type=2, label=1,
default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='double_value', full_name='froofle.protobuf.UninterpretedOption.double_value', index=4,
number=6, type=1, cpp_type=5, label=1,
default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
descriptor.FieldDescriptor(
name='string_value', full_name='froofle.protobuf.UninterpretedOption.string_value', index=5,
number=7, type=12, cpp_type=9, label=1,
default_value="",
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[], # TODO(robinson): Implement.
enum_types=[
],
options=None)
_FILEDESCRIPTORSET.fields_by_name['file'].message_type = _FILEDESCRIPTORPROTO
_FILEDESCRIPTORPROTO.fields_by_name['message_type'].message_type = _DESCRIPTORPROTO
_FILEDESCRIPTORPROTO.fields_by_name['enum_type'].message_type = _ENUMDESCRIPTORPROTO
_FILEDESCRIPTORPROTO.fields_by_name['service'].message_type = _SERVICEDESCRIPTORPROTO
_FILEDESCRIPTORPROTO.fields_by_name['extension'].message_type = _FIELDDESCRIPTORPROTO
_FILEDESCRIPTORPROTO.fields_by_name['options'].message_type = _FILEOPTIONS
_DESCRIPTORPROTO.fields_by_name['field'].message_type = _FIELDDESCRIPTORPROTO
_DESCRIPTORPROTO.fields_by_name['extension'].message_type = _FIELDDESCRIPTORPROTO
_DESCRIPTORPROTO.fields_by_name['nested_type'].message_type = _DESCRIPTORPROTO
_DESCRIPTORPROTO.fields_by_name['enum_type'].message_type = _ENUMDESCRIPTORPROTO
_DESCRIPTORPROTO.fields_by_name['extension_range'].message_type = _DESCRIPTORPROTO_EXTENSIONRANGE
_DESCRIPTORPROTO.fields_by_name['options'].message_type = _MESSAGEOPTIONS
_FIELDDESCRIPTORPROTO.fields_by_name['label'].enum_type = _FIELDDESCRIPTORPROTO_LABEL
_FIELDDESCRIPTORPROTO.fields_by_name['type'].enum_type = _FIELDDESCRIPTORPROTO_TYPE
_FIELDDESCRIPTORPROTO.fields_by_name['options'].message_type = _FIELDOPTIONS
_ENUMDESCRIPTORPROTO.fields_by_name['value'].message_type = _ENUMVALUEDESCRIPTORPROTO
_ENUMDESCRIPTORPROTO.fields_by_name['options'].message_type = _ENUMOPTIONS
_ENUMVALUEDESCRIPTORPROTO.fields_by_name['options'].message_type = _ENUMVALUEOPTIONS
_SERVICEDESCRIPTORPROTO.fields_by_name['method'].message_type = _METHODDESCRIPTORPROTO
_SERVICEDESCRIPTORPROTO.fields_by_name['options'].message_type = _SERVICEOPTIONS
_METHODDESCRIPTORPROTO.fields_by_name['options'].message_type = _METHODOPTIONS
_FILEOPTIONS.fields_by_name['optimize_for'].enum_type = _FILEOPTIONS_OPTIMIZEMODE
_FILEOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION
_MESSAGEOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION
_FIELDOPTIONS.fields_by_name['ctype'].enum_type = _FIELDOPTIONS_CTYPE
_FIELDOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION
_ENUMOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION
_ENUMVALUEOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION
_SERVICEOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION
_METHODOPTIONS.fields_by_name['uninterpreted_option'].message_type = _UNINTERPRETEDOPTION
_UNINTERPRETEDOPTION.fields_by_name['name'].message_type = _UNINTERPRETEDOPTION_NAMEPART
class FileDescriptorSet(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _FILEDESCRIPTORSET
class FileDescriptorProto(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _FILEDESCRIPTORPROTO
class DescriptorProto(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
class ExtensionRange(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _DESCRIPTORPROTO_EXTENSIONRANGE
DESCRIPTOR = _DESCRIPTORPROTO
class FieldDescriptorProto(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _FIELDDESCRIPTORPROTO
class EnumDescriptorProto(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _ENUMDESCRIPTORPROTO
class EnumValueDescriptorProto(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _ENUMVALUEDESCRIPTORPROTO
class ServiceDescriptorProto(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _SERVICEDESCRIPTORPROTO
class MethodDescriptorProto(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _METHODDESCRIPTORPROTO
class FileOptions(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _FILEOPTIONS
class MessageOptions(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _MESSAGEOPTIONS
class FieldOptions(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _FIELDOPTIONS
class EnumOptions(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _ENUMOPTIONS
class EnumValueOptions(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _ENUMVALUEOPTIONS
class ServiceOptions(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _SERVICEOPTIONS
class MethodOptions(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _METHODOPTIONS
class UninterpretedOption(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
class NamePart(message.Message):
__metaclass__ = reflection.GeneratedProtocolMessageType
DESCRIPTOR = _UNINTERPRETEDOPTION_NAMEPART
DESCRIPTOR = _UNINTERPRETEDOPTION
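
The descriptor tables above are turned into usable message classes by the GeneratedProtocolMessageType metaclass, which reads each class's DESCRIPTOR at class-creation time and synthesizes field properties plus the serialization machinery. A minimal usage sketch, assuming the standard Message API (SerializeToString / MergeFromString) declared by froofle.protobuf.message.Message:

# Illustrative round-trip through one of the generated classes above.
proto = FieldDescriptorProto()
proto.name = 'id'
proto.number = 1
proto.type = 5            # TYPE_INT32 in the FieldDescriptorProto.Type enum
data = proto.SerializeToString()

copy = FieldDescriptorProto()
copy.MergeFromString(data)
assert copy.name == 'id' and copy.number == 1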

View File

@@ -1,209 +0,0 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# http://code.google.com/p/protobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Class for decoding protocol buffer primitives.
Contains the logic for decoding every logical protocol field type
from one of the 5 physical wire types.
"""
__author__ = 'robinson@google.com (Will Robinson)'
import struct
from froofle.protobuf import message
from froofle.protobuf.internal import input_stream
from froofle.protobuf.internal import wire_format
# Note that much of this code is ported from //net/proto/ProtocolBuffer, and
# that the interface is strongly inspired by WireFormat from the C++ proto2
# implementation.
class Decoder(object):
"""Decodes logical protocol buffer fields from the wire."""
def __init__(self, s):
"""Initializes the decoder to read from s.
Args:
s: An immutable sequence of bytes, which must be accessible
via the Python buffer() primitive (i.e., buffer(s)).
"""
self._stream = input_stream.InputStream(s)
def EndOfStream(self):
"""Returns true iff we've reached the end of the bytes we're reading."""
return self._stream.EndOfStream()
def Position(self):
"""Returns the 0-indexed position in |s|."""
return self._stream.Position()
def ReadFieldNumberAndWireType(self):
"""Reads a tag from the wire. Returns a (field_number, wire_type) pair."""
tag_and_type = self.ReadUInt32()
return wire_format.UnpackTag(tag_and_type)
def SkipBytes(self, bytes):
"""Skips the specified number of bytes on the wire."""
self._stream.SkipBytes(bytes)
# Note that the Read*() methods below are not exactly symmetrical with the
# corresponding Encoder.Append*() methods. Those Encoder methods first
# encode a tag, but the Read*() methods below assume that the tag has already
# been read, and that the client wishes to read a field of the specified type
# starting at the current position.
def ReadInt32(self):
"""Reads and returns a signed, varint-encoded, 32-bit integer."""
return self._stream.ReadVarint32()
def ReadInt64(self):
"""Reads and returns a signed, varint-encoded, 64-bit integer."""
return self._stream.ReadVarint64()
def ReadUInt32(self):
"""Reads and returns an signed, varint-encoded, 32-bit integer."""
return self._stream.ReadVarUInt32()
def ReadUInt64(self):
"""Reads and returns an signed, varint-encoded,64-bit integer."""
return self._stream.ReadVarUInt64()
def ReadSInt32(self):
"""Reads and returns a signed, zigzag-encoded, varint-encoded,
32-bit integer."""
return wire_format.ZigZagDecode(self._stream.ReadVarUInt32())
def ReadSInt64(self):
"""Reads and returns a signed, zigzag-encoded, varint-encoded,
64-bit integer."""
return wire_format.ZigZagDecode(self._stream.ReadVarUInt64())
def ReadFixed32(self):
"""Reads and returns an unsigned, fixed-width, 32-bit integer."""
return self._stream.ReadLittleEndian32()
def ReadFixed64(self):
"""Reads and returns an unsigned, fixed-width, 64-bit integer."""
return self._stream.ReadLittleEndian64()
def ReadSFixed32(self):
"""Reads and returns a signed, fixed-width, 32-bit integer."""
value = self._stream.ReadLittleEndian32()
if value >= (1 << 31):
value -= (1 << 32)
return value
def ReadSFixed64(self):
"""Reads and returns a signed, fixed-width, 64-bit integer."""
value = self._stream.ReadLittleEndian64()
if value >= (1 << 63):
value -= (1 << 64)
return value
def ReadFloat(self):
"""Reads and returns a 4-byte floating-point number."""
serialized = self._stream.ReadBytes(4)
return struct.unpack('f', serialized)[0]
def ReadDouble(self):
"""Reads and returns an 8-byte floating-point number."""
serialized = self._stream.ReadBytes(8)
return struct.unpack('d', serialized)[0]
def ReadBool(self):
"""Reads and returns a bool."""
i = self._stream.ReadVarUInt32()
return bool(i)
def ReadEnum(self):
"""Reads and returns an enum value."""
return self._stream.ReadVarUInt32()
def ReadString(self):
"""Reads and returns a length-delimited string."""
bytes = self.ReadBytes()
return unicode(bytes, 'utf-8')
def ReadBytes(self):
"""Reads and returns a length-delimited byte sequence."""
length = self._stream.ReadVarUInt32()
return self._stream.ReadBytes(length)
def ReadMessageInto(self, msg):
"""Calls msg.MergeFromString() to merge
length-delimited serialized message data into |msg|.
REQUIRES: The decoder must be positioned at the serialized "length"
prefix to a length-delimited serialized message.
POSTCONDITION: The decoder is positioned just after the
serialized message, and we have merged those serialized
contents into |msg|.
"""
length = self._stream.ReadVarUInt32()
sub_buffer = self._stream.GetSubBuffer(length)
num_bytes_used = msg.MergeFromString(sub_buffer)
if num_bytes_used != length:
raise message.DecodeError(
'Submessage told to deserialize from %d-byte encoding, '
'but used only %d bytes' % (length, num_bytes_used))
self._stream.SkipBytes(num_bytes_used)
def ReadGroupInto(self, expected_field_number, group):
"""Calls group.MergeFromString() to merge
END_GROUP-delimited serialized message data into |group|.
We'll raise an exception if we don't find an END_GROUP
tag immediately after the serialized message contents.
REQUIRES: The decoder is positioned just after the START_GROUP
tag for this group.
POSTCONDITION: The decoder is positioned just after the
END_GROUP tag for this group, and we have merged
the contents of the group into |group|.
"""
sub_buffer = self._stream.GetSubBuffer() # No a priori length limit.
num_bytes_used = group.MergeFromString(sub_buffer)
if num_bytes_used < 0:
raise message.DecodeError('Group message reported negative bytes read.')
self._stream.SkipBytes(num_bytes_used)
field_number, field_type = self.ReadFieldNumberAndWireType()
if field_type != wire_format.WIRETYPE_END_GROUP:
raise message.DecodeError('Group message did not end with an END_GROUP.')
if field_number != expected_field_number:
raise message.DecodeError('END_GROUP tag had field '
'number %d, was expecting field number %d' % (
field_number, expected_field_number))
# We're now positioned just after the END_GROUP tag. Perfect.
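
ReadFieldNumberAndWireType above defers to wire_format.UnpackTag; in the standard protocol buffer wire format a tag is a single varint whose low three bits carry the wire type and whose remaining bits carry the field number. A standalone sketch of that layout (illustrative helper names, not the actual wire_format API):

def pack_tag(field_number, wire_type):
  # Low 3 bits: wire type; remaining high bits: field number.
  return (field_number << 3) | wire_type

def unpack_tag(tag):
  return (tag >> 3, tag & 0x7)

assert unpack_tag(pack_tag(1, 0)) == (1, 0)  # field 1, varint
assert unpack_tag(pack_tag(2, 2)) == (2, 2)  # field 2, length-delimited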

View File

@@ -1,206 +0,0 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# http://code.google.com/p/protobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Class for encoding protocol message primitives.
Contains the logic for encoding every logical protocol field type
into one of the 5 physical wire types.
"""
__author__ = 'robinson@google.com (Will Robinson)'
import struct
from froofle.protobuf import message
from froofle.protobuf.internal import wire_format
from froofle.protobuf.internal import output_stream
# Note that much of this code is ported from //net/proto/ProtocolBuffer, and
# that the interface is strongly inspired by WireFormat from the C++ proto2
# implementation.
class Encoder(object):
"""Encodes logical protocol buffer fields to the wire format."""
def __init__(self):
self._stream = output_stream.OutputStream()
def ToString(self):
"""Returns all values encoded in this object as a string."""
return self._stream.ToString()
# All the Append*() methods below first append a tag+type pair to the buffer
# before appending the specified value.
def AppendInt32(self, field_number, value):
"""Appends a 32-bit integer to our buffer, varint-encoded."""
self._AppendTag(field_number, wire_format.WIRETYPE_VARINT)
self._stream.AppendVarint32(value)
def AppendInt64(self, field_number, value):
"""Appends a 64-bit integer to our buffer, varint-encoded."""
self._AppendTag(field_number, wire_format.WIRETYPE_VARINT)
self._stream.AppendVarint64(value)
def AppendUInt32(self, field_number, unsigned_value):
"""Appends an unsigned 32-bit integer to our buffer, varint-encoded."""
self._AppendTag(field_number, wire_format.WIRETYPE_VARINT)
self._stream.AppendVarUInt32(unsigned_value)
def AppendUInt64(self, field_number, unsigned_value):
"""Appends an unsigned 64-bit integer to our buffer, varint-encoded."""
self._AppendTag(field_number, wire_format.WIRETYPE_VARINT)
self._stream.AppendVarUInt64(unsigned_value)
def AppendSInt32(self, field_number, value):
"""Appends a 32-bit integer to our buffer, zigzag-encoded and then
varint-encoded.
"""
self._AppendTag(field_number, wire_format.WIRETYPE_VARINT)
zigzag_value = wire_format.ZigZagEncode(value)
self._stream.AppendVarUInt32(zigzag_value)
def AppendSInt64(self, field_number, value):
"""Appends a 64-bit integer to our buffer, zigzag-encoded and then
varint-encoded.
"""
self._AppendTag(field_number, wire_format.WIRETYPE_VARINT)
zigzag_value = wire_format.ZigZagEncode(value)
self._stream.AppendVarUInt64(zigzag_value)
def AppendFixed32(self, field_number, unsigned_value):
"""Appends an unsigned 32-bit integer to our buffer, in little-endian
byte-order.
"""
self._AppendTag(field_number, wire_format.WIRETYPE_FIXED32)
self._stream.AppendLittleEndian32(unsigned_value)
def AppendFixed64(self, field_number, unsigned_value):
"""Appends an unsigned 64-bit integer to our buffer, in little-endian
byte-order.
"""
self._AppendTag(field_number, wire_format.WIRETYPE_FIXED64)
self._stream.AppendLittleEndian64(unsigned_value)
def AppendSFixed32(self, field_number, value):
"""Appends a signed 32-bit integer to our buffer, in little-endian
byte-order.
"""
sign = (value & 0x80000000) and -1 or 0
if value >> 32 != sign:
raise message.EncodeError('SFixed32 out of range: %d' % value)
self._AppendTag(field_number, wire_format.WIRETYPE_FIXED32)
self._stream.AppendLittleEndian32(value & 0xffffffff)
def AppendSFixed64(self, field_number, value):
"""Appends a signed 64-bit integer to our buffer, in little-endian
byte-order.
"""
sign = (value & 0x8000000000000000) and -1 or 0
if value >> 64 != sign:
raise message.EncodeError('SFixed64 out of range: %d' % value)
self._AppendTag(field_number, wire_format.WIRETYPE_FIXED64)
self._stream.AppendLittleEndian64(value & 0xffffffffffffffff)
def AppendFloat(self, field_number, value):
"""Appends a floating-point number to our buffer."""
self._AppendTag(field_number, wire_format.WIRETYPE_FIXED32)
self._stream.AppendRawBytes(struct.pack('f', value))
def AppendDouble(self, field_number, value):
"""Appends a double-precision floating-point number to our buffer."""
self._AppendTag(field_number, wire_format.WIRETYPE_FIXED64)
self._stream.AppendRawBytes(struct.pack('d', value))
def AppendBool(self, field_number, value):
"""Appends a boolean to our buffer."""
self.AppendInt32(field_number, value)
def AppendEnum(self, field_number, value):
"""Appends an enum value to our buffer."""
self.AppendInt32(field_number, value)
def AppendString(self, field_number, value):
"""Appends a length-prefixed unicode string, encoded as UTF-8 to our buffer,
with the length varint-encoded.
"""
self.AppendBytes(field_number, value.encode('utf-8'))
def AppendBytes(self, field_number, value):
"""Appends a length-prefixed sequence of bytes to our buffer, with the
length varint-encoded.
"""
self._AppendTag(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
self._stream.AppendVarUInt32(len(value))
self._stream.AppendRawBytes(value)
# TODO(robinson): For AppendGroup() and AppendMessage(), we'd really like to
# avoid the extra string copy here. We can do so if we widen the Message
# interface to be able to serialize to a stream in addition to a string. The
# challenge when thinking ahead to the Python/C API implementation of Message
# is finding a stream-like Python thing to which we can write raw bytes
# from C. I'm not sure such a thing exists(?). (array.array is pretty much
# what we want, but it's not directly exposed in the Python/C API).
def AppendGroup(self, field_number, group):
"""Appends a group to our buffer.
"""
self._AppendTag(field_number, wire_format.WIRETYPE_START_GROUP)
self._stream.AppendRawBytes(group.SerializeToString())
self._AppendTag(field_number, wire_format.WIRETYPE_END_GROUP)
def AppendMessage(self, field_number, msg):
"""Appends a nested message to our buffer.
"""
self._AppendTag(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
self._stream.AppendVarUInt32(msg.ByteSize())
self._stream.AppendRawBytes(msg.SerializeToString())
def AppendMessageSetItem(self, field_number, msg):
"""Appends an item using the message set wire format.
The message set message looks like this:
message MessageSet {
repeated group Item = 1 {
required int32 type_id = 2;
required string message = 3;
}
}
"""
self._AppendTag(1, wire_format.WIRETYPE_START_GROUP)
self.AppendInt32(2, field_number)
self.AppendMessage(3, msg)
self._AppendTag(1, wire_format.WIRETYPE_END_GROUP)
def _AppendTag(self, field_number, wire_type):
"""Appends a tag containing field number and wire type information."""
self._stream.AppendVarUInt32(wire_format.PackTag(field_number, wire_type))
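
AppendSInt32 and AppendSInt64 above rely on wire_format.ZigZagEncode, which interleaves negative and positive values so integers of small magnitude stay small after varint encoding. A standalone sketch, assuming the standard zigzag definition:

def zigzag_encode(n, bits=64):
  # Maps 0, -1, 1, -2, 2, ... to 0, 1, 2, 3, 4, ...
  return (n << 1) ^ (n >> (bits - 1))

def zigzag_decode(z):
  return (z >> 1) ^ -(z & 1)

assert zigzag_encode(-1) == 1 and zigzag_encode(1) == 2
assert zigzag_decode(zigzag_encode(-300)) == -300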

View File

@@ -1,326 +0,0 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# http://code.google.com/p/protobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""InputStream is the primitive interface for reading bits from the wire.
All protocol buffer deserialization can be expressed in terms of
the InputStream primitives provided here.
"""
__author__ = 'robinson@google.com (Will Robinson)'
import struct
from array import array
from froofle.protobuf import message
from froofle.protobuf.internal import wire_format
# Note that much of this code is ported from //net/proto/ProtocolBuffer, and
# that the interface is strongly inspired by CodedInputStream from the C++
# proto2 implementation.
class InputStreamBuffer(object):
"""Contains all logic for reading bits, and dealing with stream position.
If an InputStream method ever raises an exception, the stream is left
in an indeterminate state and is not safe for further use.
"""
def __init__(self, s):
# What we really want is something like array('B', s), where elements we
# read from the array are already given to us as one-byte integers. BUT
# using array() instead of buffer() would force full string copies to result
# from each GetSubBuffer() call.
#
# So, if the N serialized bytes of a single protocol buffer object are
# split evenly between 2 child messages, and so on recursively, using
# array('B', s) instead of buffer() would incur an additional N*logN bytes
# copied during deserialization.
#
# The higher constant overhead of having to ord() for every byte we read
# from the buffer in _ReadVarintHelper() could definitely lead to worse
# performance in many real-world scenarios, even if the asymptotic
# complexity is better. However, our real answer is that the mythical
# Python/C extension module output mode for the protocol compiler will
# be blazing-fast and will eliminate most use of this class anyway.
self._buffer = buffer(s)
self._pos = 0
def EndOfStream(self):
"""Returns true iff we're at the end of the stream.
If this returns true, then a call to any other InputStream method
will raise an exception.
"""
return self._pos >= len(self._buffer)
def Position(self):
"""Returns the current position in the stream, or equivalently, the
number of bytes read so far.
"""
return self._pos
def GetSubBuffer(self, size=None):
"""Returns a sequence-like object that represents a portion of our
underlying sequence.
Position 0 in the returned object corresponds to self.Position()
in this stream.
If size is specified, then the returned object ends after the
next "size" bytes in this stream. If size is not specified,
then the returned object ends at the end of this stream.
We guarantee that the returned object R supports the Python buffer
interface (and thus that the call buffer(R) will work).
Note that the returned buffer is read-only.
The intended use for this method is for nested-message and nested-group
deserialization, where we want to make a recursive MergeFromString()
call on the portion of the original sequence that contains the serialized
nested message. (And we'd like to do so without making unnecessary string
copies).
REQUIRES: size is nonnegative.
"""
# Note that buffer() doesn't perform any actual string copy.
if size is None:
return buffer(self._buffer, self._pos)
else:
if size < 0:
raise message.DecodeError('Negative size %d' % size)
return buffer(self._buffer, self._pos, size)
def SkipBytes(self, num_bytes):
"""Skip num_bytes bytes ahead, or go to the end of the stream, whichever
comes first.
REQUIRES: num_bytes is nonnegative.
"""
if num_bytes < 0:
raise message.DecodeError('Negative num_bytes %d' % num_bytes)
self._pos += num_bytes
self._pos = min(self._pos, len(self._buffer))
def ReadBytes(self, size):
"""Reads up to 'size' bytes from the stream, stopping early
only if we reach the end of the stream. Returns the bytes read
as a string.
"""
if size < 0:
raise message.DecodeError('Negative size %d' % size)
s = (self._buffer[self._pos : self._pos + size])
self._pos += len(s) # Only advance by the number of bytes actually read.
return s
def ReadLittleEndian32(self):
"""Interprets the next 4 bytes of the stream as a little-endian
encoded, unsigned 32-bit integer, and returns that integer.
"""
try:
i = struct.unpack(wire_format.FORMAT_UINT32_LITTLE_ENDIAN,
self._buffer[self._pos : self._pos + 4])
self._pos += 4
return i[0] # unpack() result is a 1-element tuple.
except struct.error, e:
raise message.DecodeError(e)
def ReadLittleEndian64(self):
"""Interprets the next 8 bytes of the stream as a little-endian
encoded, unsigned 64-bit integer, and returns that integer.
"""
try:
i = struct.unpack(wire_format.FORMAT_UINT64_LITTLE_ENDIAN,
self._buffer[self._pos : self._pos + 8])
self._pos += 8
return i[0] # unpack() result is a 1-element tuple.
except struct.error, e:
raise message.DecodeError(e)
def ReadVarint32(self):
"""Reads a varint from the stream, interprets this varint
as a signed, 32-bit integer, and returns the integer.
"""
i = self.ReadVarint64()
if not wire_format.INT32_MIN <= i <= wire_format.INT32_MAX:
raise message.DecodeError('Value out of range for int32: %d' % i)
return int(i)
def ReadVarUInt32(self):
"""Reads a varint from the stream, interprets this varint
as an unsigned, 32-bit integer, and returns the integer.
"""
i = self.ReadVarUInt64()
if i > wire_format.UINT32_MAX:
raise message.DecodeError('Value out of range for uint32: %d' % i)
return i
def ReadVarint64(self):
"""Reads a varint from the stream, interprets this varint
as a signed, 64-bit integer, and returns the integer.
"""
i = self.ReadVarUInt64()
if i > wire_format.INT64_MAX:
i -= (1 << 64)
return i
def ReadVarUInt64(self):
"""Reads a varint from the stream, interprets this varint
as an unsigned, 64-bit integer, and returns the integer.
"""
i = self._ReadVarintHelper()
if not 0 <= i <= wire_format.UINT64_MAX:
raise message.DecodeError('Value out of range for uint64: %d' % i)
return i
def _ReadVarintHelper(self):
"""Helper for the various varint-reading methods above.
Reads an unsigned, varint-encoded integer from the stream and
returns this integer.
Does no bounds checking except to ensure that we read at most as many bytes
as could possibly be present in a varint-encoded 64-bit number.
"""
result = 0
shift = 0
while 1:
if shift >= 64:
raise message.DecodeError('Too many bytes when decoding varint.')
try:
b = ord(self._buffer[self._pos])
except IndexError:
raise message.DecodeError('Truncated varint.')
self._pos += 1
result |= ((b & 0x7f) << shift)
shift += 7
if not (b & 0x80):
return result
class InputStreamArray(object):
def __init__(self, s):
self._buffer = array('B', s)
self._pos = 0
def EndOfStream(self):
return self._pos >= len(self._buffer)
def Position(self):
return self._pos
def GetSubBuffer(self, size=None):
if size is None:
return self._buffer[self._pos : ].tostring()
else:
if size < 0:
raise message.DecodeError('Negative size %d' % size)
return self._buffer[self._pos : self._pos + size].tostring()
def SkipBytes(self, num_bytes):
if num_bytes < 0:
raise message.DecodeError('Negative num_bytes %d' % num_bytes)
self._pos += num_bytes
self._pos = min(self._pos, len(self._buffer))
def ReadBytes(self, size):
if size < 0:
raise message.DecodeError('Negative size %d' % size)
s = self._buffer[self._pos : self._pos + size].tostring()
self._pos += len(s) # Only advance by the number of bytes actually read.
return s
def ReadLittleEndian32(self):
try:
i = struct.unpack(wire_format.FORMAT_UINT32_LITTLE_ENDIAN,
self._buffer[self._pos : self._pos + 4])
self._pos += 4
return i[0] # unpack() result is a 1-element tuple.
except struct.error, e:
raise message.DecodeError(e)
def ReadLittleEndian64(self):
try:
i = struct.unpack(wire_format.FORMAT_UINT64_LITTLE_ENDIAN,
self._buffer[self._pos : self._pos + 8])
self._pos += 8
return i[0] # unpack() result is a 1-element tuple.
except struct.error, e:
raise message.DecodeError(e)
def ReadVarint32(self):
i = self.ReadVarint64()
if not wire_format.INT32_MIN <= i <= wire_format.INT32_MAX:
raise message.DecodeError('Value out of range for int32: %d' % i)
return int(i)
def ReadVarUInt32(self):
i = self.ReadVarUInt64()
if i > wire_format.UINT32_MAX:
raise message.DecodeError('Value out of range for uint32: %d' % i)
return i
def ReadVarint64(self):
i = self.ReadVarUInt64()
if i > wire_format.INT64_MAX:
i -= (1 << 64)
return i
def ReadVarUInt64(self):
i = self._ReadVarintHelper()
if not 0 <= i <= wire_format.UINT64_MAX:
raise message.DecodeError('Value out of range for uint64: %d' % i)
return i
def _ReadVarintHelper(self):
result = 0
shift = 0
while 1:
if shift >= 64:
raise message.DecodeError('Too many bytes when decoding varint.')
try:
b = self._buffer[self._pos]
except IndexError:
raise message.DecodeError('Truncated varint.')
self._pos += 1
result |= ((b & 0x7f) << shift)
shift += 7
if not (b & 0x80):
return result
try:
buffer("")
InputStream = InputStreamBuffer
except NotImplementedError:
# Google App Engine: dev_appserver.py
InputStream = InputStreamArray
except RuntimeError:
# Google App Engine: production
InputStream = InputStreamArray
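
The varint format the readers above consume packs seven payload bits per byte, least-significant group first, with the high bit of each byte acting as a continuation flag. For example, 300 = 0b100101100 splits into the groups 0101100 and 0000010 and is serialized as 0xAC 0x02. A quick usage check (assuming a Python 2 runtime, as the buffer()-based code above requires):

stream = InputStream('\xac\x02')
assert stream.ReadVarUInt32() == 300
assert stream.EndOfStream()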

View File

@@ -1,69 +0,0 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# http://code.google.com/p/protobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Defines a listener interface for observing certain
state transitions on Message objects.
Also defines a null implementation of this interface.
"""
__author__ = 'robinson@google.com (Will Robinson)'
class MessageListener(object):
"""Listens for transitions to nonempty and for invalidations of cached
byte sizes. Meant to be registered via Message._SetListener().
"""
def TransitionToNonempty(self):
"""Called the *first* time that this message becomes nonempty.
Implementations are free (but not required) to call this method multiple
times after the message has become nonempty.
"""
raise NotImplementedError
def ByteSizeDirty(self):
"""Called *every* time the cached byte size value
for this object is invalidated (transitions from being
"clean" to "dirty").
"""
raise NotImplementedError
class NullMessageListener(object):
"""No-op MessageListener implementation."""
def TransitionToNonempty(self):
pass
def ByteSizeDirty(self):
pass
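
Per the docstring above, listeners are registered via Message._SetListener() so that a parent message can react when a nested message becomes nonempty or its cached byte size goes stale. An illustrative (hypothetical) implementation that simply records the callbacks it receives:

class RecordingListener(object):
  """Illustrative MessageListener that records what it was told."""
  def __init__(self):
    self.became_nonempty = False
    self.dirty_count = 0
  def TransitionToNonempty(self):
    self.became_nonempty = True
  def ByteSizeDirty(self):
    self.dirty_count += 1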

View File

@@ -1,125 +0,0 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# http://code.google.com/p/protobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""OutputStream is the primitive interface for sticking bits on the wire.
All protocol buffer serialization can be expressed in terms of
the OutputStream primitives provided here.
"""
__author__ = 'robinson@google.com (Will Robinson)'
import array
import struct
from froofle.protobuf import message
from froofle.protobuf.internal import wire_format
# Note that much of this code is ported from //net/proto/ProtocolBuffer, and
# that the interface is strongly inspired by CodedOutputStream from the C++
# proto2 implementation.
class OutputStream(object):
"""Contains all logic for writing bits, and ToString() to get the result."""
def __init__(self):
self._buffer = array.array('B')
def AppendRawBytes(self, raw_bytes):
"""Appends raw_bytes to our internal buffer."""
self._buffer.fromstring(raw_bytes)
def AppendLittleEndian32(self, unsigned_value):
"""Appends an unsigned 32-bit integer to the internal buffer,
in little-endian byte order.
"""
if not 0 <= unsigned_value <= wire_format.UINT32_MAX:
raise message.EncodeError(
'Unsigned 32-bit out of range: %d' % unsigned_value)
self._buffer.fromstring(struct.pack(
wire_format.FORMAT_UINT32_LITTLE_ENDIAN, unsigned_value))
def AppendLittleEndian64(self, unsigned_value):
"""Appends an unsigned 64-bit integer to the internal buffer,
in little-endian byte order.
"""
if not 0 <= unsigned_value <= wire_format.UINT64_MAX:
raise message.EncodeError(
'Unsigned 64-bit out of range: %d' % unsigned_value)
self._buffer.fromstring(struct.pack(
wire_format.FORMAT_UINT64_LITTLE_ENDIAN, unsigned_value))
def AppendVarint32(self, value):
"""Appends a signed 32-bit integer to the internal buffer,
encoded as a varint. (Note that a negative varint32 will
always require 10 bytes of space.)
"""
if not wire_format.INT32_MIN <= value <= wire_format.INT32_MAX:
raise message.EncodeError('Value out of range: %d' % value)
self.AppendVarint64(value)
def AppendVarUInt32(self, value):
"""Appends an unsigned 32-bit integer to the internal buffer,
encoded as a varint.
"""
if not 0 <= value <= wire_format.UINT32_MAX:
raise message.EncodeError('Value out of range: %d' % value)
self.AppendVarUInt64(value)
def AppendVarint64(self, value):
"""Appends a signed 64-bit integer to the internal buffer,
encoded as a varint.
"""
if not wire_format.INT64_MIN <= value <= wire_format.INT64_MAX:
raise message.EncodeError('Value out of range: %d' % value)
if value < 0:
value += (1 << 64)
self.AppendVarUInt64(value)
def AppendVarUInt64(self, unsigned_value):
"""Appends an unsigned 64-bit integer to the internal buffer,
encoded as a varint.
"""
if not 0 <= unsigned_value <= wire_format.UINT64_MAX:
raise message.EncodeError('Value out of range: %d' % unsigned_value)
while True:
bits = unsigned_value & 0x7f
unsigned_value >>= 7
if not unsigned_value:
self._buffer.append(bits)
break
self._buffer.append(0x80|bits)
def ToString(self):
"""Returns a string containing the bytes in our internal buffer."""
return self._buffer.tostring()
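
The AppendVarUInt64 loop above is the encoding mirror of the varint reader in the input stream: seven bits per byte, low groups first, continuation bit set on every byte but the last. A quick usage check (Python 2, since ToString() relies on array.tostring()):

out = OutputStream()
out.AppendVarUInt32(300)
assert out.ToString() == '\xac\x02'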

View File

@@ -1,268 +0,0 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# http://code.google.com/p/protobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Provides type checking routines.
This module defines type checking utilities in the forms of dictionaries:
VALUE_CHECKERS: A dictionary of field types and a value validation object.
TYPE_TO_BYTE_SIZE_FN: A dictionary with field types and a size computing
function.
TYPE_TO_SERIALIZE_METHOD: A dictionary with field types and serialization
function.
FIELD_TYPE_TO_WIRE_TYPE: A dictionary with field types and their
corresponding wire types.
TYPE_TO_DESERIALIZE_METHOD: A dictionary with field types and deserialization
function.
"""
__author__ = 'robinson@google.com (Will Robinson)'
from froofle.protobuf.internal import decoder
from froofle.protobuf.internal import encoder
from froofle.protobuf.internal import wire_format
from froofle.protobuf import descriptor
_FieldDescriptor = descriptor.FieldDescriptor
def GetTypeChecker(cpp_type, field_type):
"""Returns a type checker for a message field of the specified types.
Args:
cpp_type: C++ type of the field (see descriptor.py).
field_type: Protocol message field type (see descriptor.py).
Returns:
An instance of TypeChecker which can be used to verify the types
of values assigned to a field of the specified type.
"""
if (cpp_type == _FieldDescriptor.CPPTYPE_STRING and
field_type == _FieldDescriptor.TYPE_STRING):
return UnicodeValueChecker()
return _VALUE_CHECKERS[cpp_type]
# None of the typecheckers below make any attempt to guard against people
# subclassing builtin types and doing weird things. We're not trying to
# protect against malicious clients here, just people accidentally shooting
# themselves in the foot in obvious ways.
class TypeChecker(object):
"""Type checker used to catch type errors as early as possible
when the client is setting scalar fields in protocol messages.
"""
def __init__(self, *acceptable_types):
self._acceptable_types = acceptable_types
def CheckValue(self, proposed_value):
if not isinstance(proposed_value, self._acceptable_types):
message = ('%.1024r has type %s, but expected one of: %s' %
(proposed_value, type(proposed_value), self._acceptable_types))
raise TypeError(message)
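# A quick illustration (hypothetical usage, not part of this module):
checker = TypeChecker(bool, int)         # e.g. the CPPTYPE_BOOL checker
checker.CheckValue(True)                 # accepted
checker.CheckValue(1)                    # accepted
# checker.CheckValue("yes") would raise TypeError with the %.1024r message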
# IntValueChecker and its subclasses perform integer type-checks
# and bounds-checks.
class IntValueChecker(object):
"""Checker used for integer fields. Performs type-check and range check."""
def CheckValue(self, proposed_value):
if not isinstance(proposed_value, (int, long)):
message = ('%.1024r has type %s, but expected one of: %s' %
(proposed_value, type(proposed_value), (int, long)))
raise TypeError(message)
if not self._MIN <= proposed_value <= self._MAX:
raise ValueError('Value out of range: %d' % proposed_value)
class UnicodeValueChecker(object):
"""Checker used for string fields."""
def CheckValue(self, proposed_value):
if not isinstance(proposed_value, (str, unicode)):
message = ('%.1024r has type %s, but expected one of: %s' %
(proposed_value, type(proposed_value), (str, unicode)))
raise TypeError(message)
# If the value is of type 'str' make sure that it is in 7-bit ASCII
# encoding.
if isinstance(proposed_value, str):
try:
unicode(proposed_value, 'ascii')
except UnicodeDecodeError:
raise ValueError('%.1024r isn\'t in 7-bit ASCII encoding.'
% (proposed_value))
class Int32ValueChecker(IntValueChecker):
# We're sure to use ints instead of longs here since comparison may be more
# efficient.
_MIN = -2147483648
_MAX = 2147483647
class Uint32ValueChecker(IntValueChecker):
_MIN = 0
_MAX = (1 << 32) - 1
class Int64ValueChecker(IntValueChecker):
_MIN = -(1 << 63)
_MAX = (1 << 63) - 1
class Uint64ValueChecker(IntValueChecker):
_MIN = 0
_MAX = (1 << 64) - 1
# Type-checkers for all scalar CPPTYPEs.
_VALUE_CHECKERS = {
_FieldDescriptor.CPPTYPE_INT32: Int32ValueChecker(),
_FieldDescriptor.CPPTYPE_INT64: Int64ValueChecker(),
_FieldDescriptor.CPPTYPE_UINT32: Uint32ValueChecker(),
_FieldDescriptor.CPPTYPE_UINT64: Uint64ValueChecker(),
_FieldDescriptor.CPPTYPE_DOUBLE: TypeChecker(
float, int, long),
_FieldDescriptor.CPPTYPE_FLOAT: TypeChecker(
float, int, long),
_FieldDescriptor.CPPTYPE_BOOL: TypeChecker(bool, int),
_FieldDescriptor.CPPTYPE_ENUM: Int32ValueChecker(),
_FieldDescriptor.CPPTYPE_STRING: TypeChecker(str),
}
# Map from field type to a function F, such that F(field_num, value)
# gives the total byte size for a value of the given type. This
# byte size includes tag information and any other additional space
# associated with serializing "value".
TYPE_TO_BYTE_SIZE_FN = {
_FieldDescriptor.TYPE_DOUBLE: wire_format.DoubleByteSize,
_FieldDescriptor.TYPE_FLOAT: wire_format.FloatByteSize,
_FieldDescriptor.TYPE_INT64: wire_format.Int64ByteSize,
_FieldDescriptor.TYPE_UINT64: wire_format.UInt64ByteSize,
_FieldDescriptor.TYPE_INT32: wire_format.Int32ByteSize,
_FieldDescriptor.TYPE_FIXED64: wire_format.Fixed64ByteSize,
_FieldDescriptor.TYPE_FIXED32: wire_format.Fixed32ByteSize,
_FieldDescriptor.TYPE_BOOL: wire_format.BoolByteSize,
_FieldDescriptor.TYPE_STRING: wire_format.StringByteSize,
_FieldDescriptor.TYPE_GROUP: wire_format.GroupByteSize,
_FieldDescriptor.TYPE_MESSAGE: wire_format.MessageByteSize,
_FieldDescriptor.TYPE_BYTES: wire_format.BytesByteSize,
_FieldDescriptor.TYPE_UINT32: wire_format.UInt32ByteSize,
_FieldDescriptor.TYPE_ENUM: wire_format.EnumByteSize,
_FieldDescriptor.TYPE_SFIXED32: wire_format.SFixed32ByteSize,
_FieldDescriptor.TYPE_SFIXED64: wire_format.SFixed64ByteSize,
_FieldDescriptor.TYPE_SINT32: wire_format.SInt32ByteSize,
_FieldDescriptor.TYPE_SINT64: wire_format.SInt64ByteSize
}
# Maps from field type to an unbound Encoder method F, such that
# F(encoder, field_number, value) will append the serialization
# of a value of this type to the encoder.
_Encoder = encoder.Encoder
TYPE_TO_SERIALIZE_METHOD = {
_FieldDescriptor.TYPE_DOUBLE: _Encoder.AppendDouble,
_FieldDescriptor.TYPE_FLOAT: _Encoder.AppendFloat,
_FieldDescriptor.TYPE_INT64: _Encoder.AppendInt64,
_FieldDescriptor.TYPE_UINT64: _Encoder.AppendUInt64,
_FieldDescriptor.TYPE_INT32: _Encoder.AppendInt32,
_FieldDescriptor.TYPE_FIXED64: _Encoder.AppendFixed64,
_FieldDescriptor.TYPE_FIXED32: _Encoder.AppendFixed32,
_FieldDescriptor.TYPE_BOOL: _Encoder.AppendBool,
_FieldDescriptor.TYPE_STRING: _Encoder.AppendString,
_FieldDescriptor.TYPE_GROUP: _Encoder.AppendGroup,
_FieldDescriptor.TYPE_MESSAGE: _Encoder.AppendMessage,
_FieldDescriptor.TYPE_BYTES: _Encoder.AppendBytes,
_FieldDescriptor.TYPE_UINT32: _Encoder.AppendUInt32,
_FieldDescriptor.TYPE_ENUM: _Encoder.AppendEnum,
_FieldDescriptor.TYPE_SFIXED32: _Encoder.AppendSFixed32,
_FieldDescriptor.TYPE_SFIXED64: _Encoder.AppendSFixed64,
_FieldDescriptor.TYPE_SINT32: _Encoder.AppendSInt32,
_FieldDescriptor.TYPE_SINT64: _Encoder.AppendSInt64,
}
# Maps from field type to expected wiretype.
FIELD_TYPE_TO_WIRE_TYPE = {
_FieldDescriptor.TYPE_DOUBLE: wire_format.WIRETYPE_FIXED64,
_FieldDescriptor.TYPE_FLOAT: wire_format.WIRETYPE_FIXED32,
_FieldDescriptor.TYPE_INT64: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_UINT64: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_INT32: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_FIXED64: wire_format.WIRETYPE_FIXED64,
_FieldDescriptor.TYPE_FIXED32: wire_format.WIRETYPE_FIXED32,
_FieldDescriptor.TYPE_BOOL: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_STRING:
wire_format.WIRETYPE_LENGTH_DELIMITED,
_FieldDescriptor.TYPE_GROUP: wire_format.WIRETYPE_START_GROUP,
_FieldDescriptor.TYPE_MESSAGE:
wire_format.WIRETYPE_LENGTH_DELIMITED,
_FieldDescriptor.TYPE_BYTES:
wire_format.WIRETYPE_LENGTH_DELIMITED,
_FieldDescriptor.TYPE_UINT32: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_ENUM: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_SFIXED32: wire_format.WIRETYPE_FIXED32,
_FieldDescriptor.TYPE_SFIXED64: wire_format.WIRETYPE_FIXED64,
_FieldDescriptor.TYPE_SINT32: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_SINT64: wire_format.WIRETYPE_VARINT,
}
# Maps from field type to an unbound Decoder method F,
# such that F(decoder) will read a field of the requested type.
#
# Note that Message and Group are intentionally missing here.
# They're handled by _RecursivelyMerge().
_Decoder = decoder.Decoder
TYPE_TO_DESERIALIZE_METHOD = {
_FieldDescriptor.TYPE_DOUBLE: _Decoder.ReadDouble,
_FieldDescriptor.TYPE_FLOAT: _Decoder.ReadFloat,
_FieldDescriptor.TYPE_INT64: _Decoder.ReadInt64,
_FieldDescriptor.TYPE_UINT64: _Decoder.ReadUInt64,
_FieldDescriptor.TYPE_INT32: _Decoder.ReadInt32,
_FieldDescriptor.TYPE_FIXED64: _Decoder.ReadFixed64,
_FieldDescriptor.TYPE_FIXED32: _Decoder.ReadFixed32,
_FieldDescriptor.TYPE_BOOL: _Decoder.ReadBool,
_FieldDescriptor.TYPE_STRING: _Decoder.ReadString,
_FieldDescriptor.TYPE_BYTES: _Decoder.ReadBytes,
_FieldDescriptor.TYPE_UINT32: _Decoder.ReadUInt32,
_FieldDescriptor.TYPE_ENUM: _Decoder.ReadEnum,
_FieldDescriptor.TYPE_SFIXED32: _Decoder.ReadSFixed32,
_FieldDescriptor.TYPE_SFIXED64: _Decoder.ReadSFixed64,
_FieldDescriptor.TYPE_SINT32: _Decoder.ReadSInt32,
_FieldDescriptor.TYPE_SINT64: _Decoder.ReadSInt64,
}


@@ -1,236 +0,0 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# http://code.google.com/p/protobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Constants and static functions to support protocol buffer wire format."""
__author__ = 'robinson@google.com (Will Robinson)'
import struct
from froofle.protobuf import message
TAG_TYPE_BITS = 3 # Number of bits used to hold type info in a proto tag.
_TAG_TYPE_MASK = (1 << TAG_TYPE_BITS) - 1 # 0x7
# These numbers identify the wire type of a protocol buffer value.
# We use the least-significant TAG_TYPE_BITS bits of the varint-encoded
# tag-and-type to store one of these WIRETYPE_* constants.
# These values must match WireType enum in //net/proto2/public/wire_format.h.
WIRETYPE_VARINT = 0
WIRETYPE_FIXED64 = 1
WIRETYPE_LENGTH_DELIMITED = 2
WIRETYPE_START_GROUP = 3
WIRETYPE_END_GROUP = 4
WIRETYPE_FIXED32 = 5
_WIRETYPE_MAX = 5
# Bounds for various integer types.
INT32_MAX = int((1 << 31) - 1)
INT32_MIN = int(-(1 << 31))
UINT32_MAX = (1 << 32) - 1
INT64_MAX = (1 << 63) - 1
INT64_MIN = -(1 << 63)
UINT64_MAX = (1 << 64) - 1
# "struct" format strings that will encode/decode the specified formats.
FORMAT_UINT32_LITTLE_ENDIAN = '<I'
FORMAT_UINT64_LITTLE_ENDIAN = '<Q'
# We'll have to provide alternate implementations of AppendLittleEndian*() on
# any architectures where these checks fail.
if struct.calcsize(FORMAT_UINT32_LITTLE_ENDIAN) != 4:
raise AssertionError('Format "I" is not a 32-bit number.')
if struct.calcsize(FORMAT_UINT64_LITTLE_ENDIAN) != 8:
raise AssertionError('Format "Q" is not a 64-bit number.')
def PackTag(field_number, wire_type):
"""Returns an unsigned 32-bit integer that encodes the field number and
wire type information in standard protocol message wire format.
Args:
field_number: Expected to be an integer in the range [1, 1 << 29)
wire_type: One of the WIRETYPE_* constants.
"""
if not 0 <= wire_type <= _WIRETYPE_MAX:
raise message.EncodeError('Unknown wire type: %d' % wire_type)
return (field_number << TAG_TYPE_BITS) | wire_type
def UnpackTag(tag):
"""The inverse of PackTag(). Given an unsigned 32-bit number,
returns a (field_number, wire_type) tuple.
"""
return (tag >> TAG_TYPE_BITS), (tag & _TAG_TYPE_MASK)
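# Concretely, the field number occupies the high bits and the wire type the
# low three, so for example:
tag = PackTag(1, WIRETYPE_VARINT)                    # (1 << 3) | 0 == 8
assert UnpackTag(tag) == (1, WIRETYPE_VARINT)
assert PackTag(2, WIRETYPE_LENGTH_DELIMITED) == 18   # (2 << 3) | 2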
def ZigZagEncode(value):
"""ZigZag Transform: Encodes signed integers so that they can be
effectively used with varint encoding. See wire_format.h for
more details.
"""
if value >= 0:
return value << 1
return (value << 1) ^ (~0)
def ZigZagDecode(value):
"""Inverse of ZigZagEncode()."""
if not value & 0x1:
return value >> 1
return (value >> 1) ^ (~0)
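# The transform interleaves non-negative and negative values so that small
# magnitudes stay small on the wire:
assert ZigZagEncode(0) == 0
assert ZigZagEncode(-1) == 1
assert ZigZagEncode(1) == 2
assert ZigZagEncode(-2) == 3
assert ZigZagDecode(ZigZagEncode(-64)) == -64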
# The *ByteSize() functions below return the number of bytes required to
# serialize "field number + type" information and then serialize the value.
def Int32ByteSize(field_number, int32):
return Int64ByteSize(field_number, int32)
def Int64ByteSize(field_number, int64):
# Have to convert to uint before calling UInt64ByteSize().
return UInt64ByteSize(field_number, 0xffffffffffffffff & int64)
def UInt32ByteSize(field_number, uint32):
return UInt64ByteSize(field_number, uint32)
def UInt64ByteSize(field_number, uint64):
return _TagByteSize(field_number) + _VarUInt64ByteSizeNoTag(uint64)
def SInt32ByteSize(field_number, int32):
return UInt32ByteSize(field_number, ZigZagEncode(int32))
def SInt64ByteSize(field_number, int64):
return UInt64ByteSize(field_number, ZigZagEncode(int64))
def Fixed32ByteSize(field_number, fixed32):
return _TagByteSize(field_number) + 4
def Fixed64ByteSize(field_number, fixed64):
return _TagByteSize(field_number) + 8
def SFixed32ByteSize(field_number, sfixed32):
return _TagByteSize(field_number) + 4
def SFixed64ByteSize(field_number, sfixed64):
return _TagByteSize(field_number) + 8
def FloatByteSize(field_number, flt):
return _TagByteSize(field_number) + 4
def DoubleByteSize(field_number, double):
return _TagByteSize(field_number) + 8
def BoolByteSize(field_number, b):
return _TagByteSize(field_number) + 1
def EnumByteSize(field_number, enum):
return UInt32ByteSize(field_number, enum)
def StringByteSize(field_number, string):
return BytesByteSize(field_number, string.encode('utf-8'))
def BytesByteSize(field_number, b):
return (_TagByteSize(field_number)
+ _VarUInt64ByteSizeNoTag(len(b))
+ len(b))
def GroupByteSize(field_number, message):
return (2 * _TagByteSize(field_number) # START and END group.
+ message.ByteSize())
def MessageByteSize(field_number, message):
return (_TagByteSize(field_number)
+ _VarUInt64ByteSizeNoTag(message.ByteSize())
+ message.ByteSize())
def MessageSetItemByteSize(field_number, msg):
# First compute the sizes of the tags.
# There are two tags for the beginning and end of the repeated group (that
# is field number 1), one tag with field number 2 (type_id) and one with
# field number 3 (message).
total_size = (2 * _TagByteSize(1) + _TagByteSize(2) + _TagByteSize(3))
# Add the number of bytes for type_id.
total_size += _VarUInt64ByteSizeNoTag(field_number)
message_size = msg.ByteSize()
# The number of bytes for encoding the length of the message.
total_size += _VarUInt64ByteSizeNoTag(message_size)
# The size of the message.
total_size += message_size
return total_size
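# Worked example (hypothetical numbers): for type_id 100 and a 10-byte
# message payload the total is
#   2 * _TagByteSize(1) + _TagByteSize(2) + _TagByteSize(3)   ->  4
#   + _VarUInt64ByteSizeNoTag(100)                            ->  1
#   + _VarUInt64ByteSizeNoTag(10) + 10                        -> 11
# giving 16 bytes in all.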
# Private helper functions for the *ByteSize() functions above.
def _TagByteSize(field_number):
"""Returns the bytes required to serialize a tag with this field number."""
# Just pass in type 0, since the type won't affect the tag+type size.
return _VarUInt64ByteSizeNoTag(PackTag(field_number, 0))
def _VarUInt64ByteSizeNoTag(uint64):
"""Returns the bytes required to serialize a single varint.
uint64 must be unsigned.
"""
if uint64 > UINT64_MAX:
raise message.EncodeError('Value out of range: %d' % uint64)
bytes = 1
while uint64 > 0x7f:
bytes += 1
uint64 >>= 7
return bytes
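# For instance, anything up to 0x7f fits in one byte, 300 needs two, and a
# negative int64 (after the 64-bit mask applied by Int64ByteSize) always
# costs the full ten bytes:
assert _VarUInt64ByteSizeNoTag(0x7f) == 1
assert _VarUInt64ByteSizeNoTag(300) == 2
assert _VarUInt64ByteSizeNoTag(0xffffffffffffffff & -1) == 10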


@@ -1,246 +0,0 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# http://code.google.com/p/protobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# TODO(robinson): We should just make these methods all "pure-virtual" and move
# all implementation out, into reflection.py for now.
"""Contains an abstract base class for protocol messages."""
__author__ = 'robinson@google.com (Will Robinson)'
from froofle.protobuf import text_format
class Error(Exception): pass
class DecodeError(Error): pass
class EncodeError(Error): pass
class Message(object):
"""Abstract base class for protocol messages.
Protocol message classes are almost always generated by the protocol
compiler. These generated types subclass Message and implement the methods
shown below.
TODO(robinson): Link to an HTML document here.
TODO(robinson): Document that instances of this class will also
have an Extensions attribute with __getitem__ and __setitem__.
Again, not sure how to best convey this.
TODO(robinson): Document that the class must also have a static
RegisterExtension(extension_field) method.
Not sure how to best express at this point.
"""
# TODO(robinson): Document these fields and methods.
__slots__ = []
DESCRIPTOR = None
def __eq__(self, other_msg):
raise NotImplementedError
def __ne__(self, other_msg):
# Can't just say self != other_msg, since that would infinitely recurse. :)
return not self == other_msg
def __str__(self):
return text_format.MessageToString(self)
def MergeFrom(self, other_msg):
"""Merges the contents of the specified message into current message.
This method merges the contents of the specified message into the current
message. Singular fields that are set in the specified message overwrite
the corresponding fields in the current message. Repeated fields are
appended. Singular sub-messages and groups are recursively merged.
Args:
other_msg: Message to merge into the current message.
"""
raise NotImplementedError
def CopyFrom(self, other_msg):
"""Copies the content of the specified message into the current message.
The method clears the current message and then merges the specified
message using MergeFrom.
Args:
other_msg: Message to copy into the current one.
"""
if self == other_msg:
return
self.Clear()
self.MergeFrom(other_msg)
def Clear(self):
"""Clears all data that was set in the message."""
raise NotImplementedError
def IsInitialized(self):
"""Checks if the message is initialized.
Returns:
The method returns True if the message is initialized (i.e. all of its
required fields are set).
"""
raise NotImplementedError
# TODO(robinson): MergeFromString() should probably return None and be
# implemented in terms of a helper that returns the # of bytes read. Our
# deserialization routines would use the helper when recursively
# deserializing, but the end user would almost always just want the no-return
# MergeFromString().
def MergeFromString(self, serialized):
"""Merges serialized protocol buffer data into this message.
When we find a field in |serialized| that is already present
in this message:
- If it's a "repeated" field, we append to the end of our list.
- Else, if it's a scalar, we overwrite our field.
- Else, (it's a nonrepeated composite), we recursively merge
into the existing composite.
TODO(robinson): Document handling of unknown fields.
Args:
serialized: Any object that allows us to call buffer(serialized)
to access a string of bytes using the buffer interface.
TODO(robinson): When we switch to a helper, this will return None.
Returns:
The number of bytes read from |serialized|.
For non-group messages, this will always be len(serialized),
but for messages which are actually groups, this will
generally be less than len(serialized), since we must
stop when we reach an END_GROUP tag. Note that if
we *do* stop because of an END_GROUP tag, the number
of bytes returned does not include the bytes
for the END_GROUP tag information.
"""
raise NotImplementedError
def ParseFromString(self, serialized):
"""Like MergeFromString(), except we clear the object first."""
self.Clear()
self.MergeFromString(serialized)
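# Assuming a generated message class Foo (hypothetical), the difference is
# only whether existing state survives:
#   msg = Foo()
#   msg.ParseFromString(data)   # clears first; msg holds exactly what data encodes
#   msg.MergeFromString(more)   # keeps prior fields, merging per the rules above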
def SerializeToString(self):
"""Serializes the protocol message to a binary string.
Returns:
A binary string representation of the message if all of the required
fields in the message are set (i.e. the message is initialized).
Raises:
message.EncodeError if the message isn't initialized.
"""
raise NotImplementedError
def SerializePartialToString(self):
"""Serializes the protocol message to a binary string.
This method is similar to SerializeToString but doesn't check if the
message is initialized.
Returns:
A string representation of the partial message.
"""
raise NotImplementedError
# TODO(robinson): Decide whether we like these better
# than auto-generated has_foo() and clear_foo() methods
# on the instances themselves. This way is less consistent
# with C++, but it makes reflection-type access easier and
# reduces the number of magically autogenerated things.
#
# TODO(robinson): Be sure to document (and test) exactly
# which field names are accepted here. Are we case-sensitive?
# What do we do with fields that share names with Python keywords
# like 'lambda' and 'yield'?
#
# nnorwitz says:
# """
# Typically (in python), an underscore is appended to names that are
# keywords. So they would become lambda_ or yield_.
# """
def ListFields(self):
"""Returns a list of (FieldDescriptor, value) tuples for all
fields in the message which are not empty. A singular field is non-empty
if HasField() would return true, and a repeated field is non-empty if
it contains at least one element. The fields are ordered by field
number."""
raise NotImplementedError
def HasField(self, field_name):
raise NotImplementedError
def ClearField(self, field_name):
raise NotImplementedError
def HasExtension(self, extension_handle):
raise NotImplementedError
def ClearExtension(self, extension_handle):
raise NotImplementedError
def ByteSize(self):
"""Returns the serialized size of this message.
Recursively calls ByteSize() on all contained messages.
"""
raise NotImplementedError
def _SetListener(self, message_listener):
"""Internal method used by the protocol message implementation.
Clients should not call this directly.
Sets a listener that this message will call on certain state transitions.
The purpose of this method is to register back-edges from children to
parents at runtime, for the purpose of setting "has" bits and
byte-size-dirty bits in the parent and ancestor objects whenever a child or
descendant object is modified.
If the client wants to disconnect this Message from the object tree, she
explicitly sets callback to None.
If message_listener is None, unregisters any existing listener. Otherwise,
message_listener must implement the MessageListener interface in
internal/message_listener.py, and we discard any listener registered
via a previous _SetListener() call.
"""
raise NotImplementedError

File diff suppressed because it is too large


@@ -1,208 +0,0 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# http://code.google.com/p/protobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Declares the RPC service interfaces.
This module declares the abstract interfaces underlying proto2 RPC
services. These are intended to be independent of any particular RPC
implementation, so that proto2 services can be used on top of a variety
of implementations.
"""
__author__ = 'petar@google.com (Petar Petrov)'
class Service(object):
"""Abstract base interface for protocol-buffer-based RPC services.
Services themselves are abstract classes (implemented either by servers or as
stubs), but they subclass this base interface. The methods of this
interface can be used to call the methods of the service without knowing
its exact type at compile time (analogous to the Message interface).
"""
def GetDescriptor(self):
"""Retrieves this service's descriptor."""
raise NotImplementedError
def CallMethod(self, method_descriptor, rpc_controller,
request, done):
"""Calls a method of the service specified by method_descriptor.
Preconditions:
* method_descriptor.service == GetDescriptor
* request is of the exact same class as returned by
GetRequestClass(method).
* After the call has started, the request must not be modified.
* "rpc_controller" is of the correct type for the RPC implementation being
used by this Service. For stubs, the "correct type" depends on the
RpcChannel which the stub is using.
Postconditions:
* "done" will be called when the method is complete. This may be
before CallMethod() returns or it may be at some point in the future.
"""
raise NotImplementedError
def GetRequestClass(self, method_descriptor):
"""Returns the class of the request message for the specified method.
CallMethod() requires that the request is of a particular subclass of
Message. GetRequestClass() returns that required class.
Example:
method = service.GetDescriptor().FindMethodByName("Foo")
request = stub.GetRequestClass(method)()
request.ParseFromString(input)
service.CallMethod(method, request, callback)
"""
raise NotImplementedError
def GetResponseClass(self, method_descriptor):
"""Returns the class of the response message for the specified method.
This method isn't really needed, as the RpcChannel's CallMethod constructs
the response protocol message. It's provided anyway in case it is useful
for the caller to know the response type in advance.
"""
raise NotImplementedError
class RpcController(object):
"""An RpcController mediates a single method call.
The primary purpose of the controller is to provide a way to manipulate
settings specific to the RPC implementation and to find out about RPC-level
errors. The methods provided by the RpcController interface are intended
to be a "least common denominator" set of features which we expect all
implementations to support. Specific implementations may provide more
advanced features (e.g. deadline propagation).
"""
# Client-side methods below
def Reset(self):
"""Resets the RpcController to its initial state.
After the RpcController has been reset, it may be reused in
a new call. Must not be called while an RPC is in progress.
"""
raise NotImplementedError
def Failed(self):
"""Returns true if the call failed.
After a call has finished, returns true if the call failed. The possible
reasons for failure depend on the RPC implementation. Failed() must not
be called before a call has finished. If Failed() returns true, the
contents of the response message are undefined.
"""
raise NotImplementedError
def ErrorText(self):
"""If Failed is true, returns a human-readable description of the error."""
raise NotImplementedError
def StartCancel(self):
"""Initiate cancellation.
Advises the RPC system that the caller desires that the RPC call be
canceled. The RPC system may cancel it immediately, may wait awhile and
then cancel it, or may not even cancel the call at all. If the call is
canceled, the "done" callback will still be called and the RpcController
will indicate that the call failed at that time.
"""
raise NotImplementedError
# Server-side methods below
def SetFailed(self, reason):
"""Sets a failure reason.
Causes Failed() to return true on the client side. "reason" will be
incorporated into the message returned by ErrorText(). If you find
you need to return machine-readable information about failures, you
should incorporate it into your response protocol buffer and should
NOT call SetFailed().
"""
raise NotImplementedError
def IsCanceled(self):
"""Checks if the client cancelled the RPC.
If true, indicates that the client canceled the RPC, so the server may
as well give up on replying to it. The server should still call the
final "done" callback.
"""
raise NotImplementedError
def NotifyOnCancel(self, callback):
"""Sets a callback to invoke on cancel.
Asks that the given callback be called when the RPC is canceled. The
callback will always be called exactly once. If the RPC completes without
being canceled, the callback will be called after completion. If the RPC
has already been canceled when NotifyOnCancel() is called, the callback
will be called immediately.
NotifyOnCancel() must be called no more than once per request.
"""
raise NotImplementedError
class RpcChannel(object):
"""Abstract interface for an RPC channel.
An RpcChannel represents a communication line to a service which can be used
to call that service's methods. The service may be running on another
machine. Normally, you should not use an RpcChannel directly, but instead
construct a stub Service wrapping it. Example:
channel = rpcImpl.Channel("remotehost.example.com:1234")
controller = rpcImpl.Controller()
service = MyService_Stub(channel)
service.MyMethod(controller, request, callback)
"""
def CallMethod(self, method_descriptor, rpc_controller,
request, response_class, done):
"""Calls the method identified by the descriptor.
Call the given method of the remote service. The signature of this
procedure looks the same as Service.CallMethod(), but the requirements
are less strict in one important way: the request object doesn't have to
be of any specific class as long as its descriptor is method.input_type.
"""
raise NotImplementedError


@@ -1,289 +0,0 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# http://code.google.com/p/protobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Contains metaclasses used to create protocol service and service stub
classes from ServiceDescriptor objects at runtime.
The GeneratedServiceType and GeneratedServiceStubType metaclasses are used to
inject all useful functionality into the classes output by the protocol
compiler at compile-time.
"""
__author__ = 'petar@google.com (Petar Petrov)'
class GeneratedServiceType(type):
"""Metaclass for service classes created at runtime from ServiceDescriptors.
Implementations for all methods described in the Service class are added here
by this class. We also create properties to allow getting/setting all fields
in the protocol message.
The protocol compiler currently uses this metaclass to create protocol service
classes at runtime. Clients can also manually create their own classes at
runtime, as in this example:
mydescriptor = ServiceDescriptor(.....)
class MyProtoService(service.Service):
__metaclass__ = GeneratedServiceType
DESCRIPTOR = mydescriptor
myservice_instance = MyProtoService()
...
"""
_DESCRIPTOR_KEY = 'DESCRIPTOR'
def __init__(cls, name, bases, dictionary):
"""Creates a message service class.
Args:
name: Name of the class (ignored, but required by the metaclass
protocol).
bases: Base classes of the class being constructed.
dictionary: The class dictionary of the class being constructed.
dictionary[_DESCRIPTOR_KEY] must contain a ServiceDescriptor object
describing this protocol service type.
"""
# Don't do anything if this class doesn't have a descriptor. This happens
# when a service class is subclassed.
if GeneratedServiceType._DESCRIPTOR_KEY not in dictionary:
return
descriptor = dictionary[GeneratedServiceType._DESCRIPTOR_KEY]
service_builder = _ServiceBuilder(descriptor)
service_builder.BuildService(cls)
class GeneratedServiceStubType(GeneratedServiceType):
"""Metaclass for service stubs created at runtime from ServiceDescriptors.
This class has similar responsibilities as GeneratedServiceType, except that
it creates the service stub classes.
"""
_DESCRIPTOR_KEY = 'DESCRIPTOR'
def __init__(cls, name, bases, dictionary):
"""Creates a message service stub class.
Args:
name: Name of the class (ignored here).
bases: Base classes of the class being constructed.
dictionary: The class dictionary of the class being constructed.
dictionary[_DESCRIPTOR_KEY] must contain a ServiceDescriptor object
describing this protocol service type.
"""
super(GeneratedServiceStubType, cls).__init__(name, bases, dictionary)
# Don't do anything if this class doesn't have a descriptor. This happens
# when a service stub is subclassed.
if GeneratedServiceStubType._DESCRIPTOR_KEY not in dictionary:
return
descriptor = dictionary[GeneratedServiceStubType._DESCRIPTOR_KEY]
service_stub_builder = _ServiceStubBuilder(descriptor)
service_stub_builder.BuildServiceStub(cls)
class _ServiceBuilder(object):
"""This class constructs a protocol service class using a service descriptor.
Given a service descriptor, this class constructs a class that represents
the specified service descriptor. One service builder instance constructs
exactly one service class. That means all instances of that class share the
same builder.
"""
def __init__(self, service_descriptor):
"""Initializes an instance of the service class builder.
Args:
service_descriptor: ServiceDescriptor to use when constructing the
service class.
"""
self.descriptor = service_descriptor
def BuildService(self, cls):
"""Constructs the service class.
Args:
cls: The class that will be constructed.
"""
# CallMethod needs to operate with an instance of the Service class. This
# internal wrapper function exists only to be able to pass the service
# instance to the method that does the real CallMethod work.
def _WrapCallMethod(srvc, method_descriptor,
rpc_controller, request, callback):
self._CallMethod(srvc, method_descriptor,
rpc_controller, request, callback)
self.cls = cls
cls.CallMethod = _WrapCallMethod
cls.GetDescriptor = self._GetDescriptor
cls.GetRequestClass = self._GetRequestClass
cls.GetResponseClass = self._GetResponseClass
for method in self.descriptor.methods:
setattr(cls, method.name, self._GenerateNonImplementedMethod(method))
def _GetDescriptor(self):
"""Retrieves the service descriptor.
Returns:
The descriptor of the service (of type ServiceDescriptor).
"""
return self.descriptor
def _CallMethod(self, srvc, method_descriptor,
rpc_controller, request, callback):
"""Calls the method described by a given method descriptor.
Args:
srvc: Instance of the service for which this method is called.
method_descriptor: Descriptor that represents the method to call.
rpc_controller: RPC controller to use for this method's execution.
request: Request protocol message.
callback: A callback to invoke after the method has completed.
"""
if method_descriptor.containing_service != self.descriptor:
raise RuntimeError(
'CallMethod() given method descriptor for wrong service type.')
method = getattr(srvc, method_descriptor.name)
method(rpc_controller, request, callback)
def _GetRequestClass(self, method_descriptor):
"""Returns the class of the request protocol message.
Args:
method_descriptor: Descriptor of the method for which to return the
request protocol message class.
Returns:
A class that represents the input protocol message of the specified
method.
"""
if method_descriptor.containing_service != self.descriptor:
raise RuntimeError(
'GetRequestClass() given method descriptor for wrong service type.')
return method_descriptor.input_type._concrete_class
def _GetResponseClass(self, method_descriptor):
"""Returns the class of the response protocol message.
Args:
method_descriptor: Descriptor of the method for which to return the
response protocol message class.
Returns:
A class that represents the output protocol message of the specified
method.
"""
if method_descriptor.containing_service != self.descriptor:
raise RuntimeError(
'GetResponseClass() given method descriptor for wrong service type.')
return method_descriptor.output_type._concrete_class
def _GenerateNonImplementedMethod(self, method):
"""Generates and returns a method that can be set for a service methods.
Args:
method: Descriptor of the service method for which a method is to be
generated.
Returns:
A method that can be added to the service class.
"""
return lambda inst, rpc_controller, request, callback: (
self._NonImplementedMethod(method.name, rpc_controller, callback))
def _NonImplementedMethod(self, method_name, rpc_controller, callback):
"""The body of all methods in the generated service class.
Args:
method_name: Name of the method being executed.
rpc_controller: RPC controller used to execute this method.
callback: A callback which will be invoked when the method finishes.
"""
rpc_controller.SetFailed('Method %s not implemented.' % method_name)
callback(None)
class _ServiceStubBuilder(object):
"""Constructs a protocol service stub class using a service descriptor.
Given a service descriptor, this class constructs a suitable stub class.
A stub is just a type-safe wrapper around an RpcChannel which emulates a
local implementation of the service.
One service stub builder instance constructs exactly one class. This means
all instances of that class share the same service stub builder.
"""
def __init__(self, service_descriptor):
"""Initializes an instance of the service stub class builder.
Args:
service_descriptor: ServiceDescriptor to use when constructing the
stub class.
"""
self.descriptor = service_descriptor
def BuildServiceStub(self, cls):
"""Constructs the stub class.
Args:
cls: The class that will be constructed.
"""
def _ServiceStubInit(stub, rpc_channel):
stub.rpc_channel = rpc_channel
self.cls = cls
cls.__init__ = _ServiceStubInit
for method in self.descriptor.methods:
setattr(cls, method.name, self._GenerateStubMethod(method))
def _GenerateStubMethod(self, method):
return lambda inst, rpc_controller, request, callback: self._StubMethod(
inst, method, rpc_controller, request, callback)
def _StubMethod(self, stub, method_descriptor,
rpc_controller, request, callback):
"""The body of all service methods in the generated stub class.
Args:
stub: Stub instance.
method_descriptor: Descriptor of the invoked method.
rpc_controller: Rpc controller to execute the method.
request: Request protocol message.
callback: A callback to execute when the method finishes.
"""
stub.rpc_channel.CallMethod(
method_descriptor, rpc_controller, request,
method_descriptor.output_type._concrete_class, callback)


@@ -1,125 +0,0 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# http://code.google.com/p/protobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Contains routines for printing protocol messages in text format."""
__author__ = 'kenton@google.com (Kenton Varda)'
import cStringIO
from froofle.protobuf import descriptor
__all__ = [ 'MessageToString', 'PrintMessage', 'PrintField', 'PrintFieldValue' ]
def MessageToString(message):
out = cStringIO.StringIO()
PrintMessage(message, out)
result = out.getvalue()
out.close()
return result
def PrintMessage(message, out, indent = 0):
for field, value in message.ListFields():
if field.label == descriptor.FieldDescriptor.LABEL_REPEATED:
for element in value:
PrintField(field, element, out, indent)
else:
PrintField(field, value, out, indent)
def PrintField(field, value, out, indent = 0):
"""Print a single field name/value pair. For repeated fields, the value
should be a single element."""
out.write(' ' * indent)
if field.is_extension:
out.write('[')
if (field.containing_type.GetOptions().message_set_wire_format and
field.type == descriptor.FieldDescriptor.TYPE_MESSAGE and
field.message_type == field.extension_scope and
field.label == descriptor.FieldDescriptor.LABEL_OPTIONAL):
out.write(field.message_type.full_name)
else:
out.write(field.full_name)
out.write(']')
elif field.type == descriptor.FieldDescriptor.TYPE_GROUP:
# For groups, use the capitalized name.
out.write(field.message_type.name)
else:
out.write(field.name)
if field.cpp_type != descriptor.FieldDescriptor.CPPTYPE_MESSAGE:
# The colon is optional in this case, but our cross-language golden files
# don't include it.
out.write(': ')
PrintFieldValue(field, value, out, indent)
out.write('\n')
def PrintFieldValue(field, value, out, indent = 0):
"""Print a single field value (not including name). For repeated fields,
the value should be a single element."""
if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE:
out.write(' {\n')
PrintMessage(value, out, indent + 2)
out.write(' ' * indent + '}')
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_ENUM:
out.write(field.enum_type.values_by_number[value].name)
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_STRING:
out.write('\"')
out.write(_CEscape(value))
out.write('\"')
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_BOOL:
if value:
out.write("true")
else:
out.write("false")
else:
out.write(str(value))
# text.encode('string_escape') does not seem to satisfy our needs as it
# encodes unprintable characters using two-digit hex escapes whereas our
# C++ unescaping function allows hex escapes to be any length. So,
# "\0011".encode('string_escape') ends up being "\\x011", which will be
# decoded in C++ as a single-character string with char code 0x11.
def _CEscape(text):
def escape(c):
o = ord(c)
if o == 10: return r"\n" # optional escape
if o == 13: return r"\r" # optional escape
if o == 9: return r"\t" # optional escape
if o == 39: return r"\'" # optional escape
if o == 34: return r'\"' # necessary escape
if o == 92: return r"\\" # necessary escape
if o >= 127 or o < 32: return "\\%03o" % o # necessary escapes
return c
return "".join([escape(c) for c in text])


@@ -1,174 +0,0 @@
#
# Copyright (C) 2008 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import getpass
import os
import sys
from tempfile import mkstemp
from codereview.proto_client import HttpRpc, Proxy
from codereview.review_pb2 import ReviewService_Stub
from codereview.upload_bundle_pb2 import *
from git_command import GitCommand
from error import UploadError
try:
import readline
except ImportError:
pass
MAX_SEGMENT_SIZE = 1020 * 1024
def _GetRpcServer(email, server, save_cookies):
"""Returns an RpcServer.
Returns:
A new RpcServer, on which RPC calls can be made.
"""
def GetUserCredentials():
"""Prompts the user for a username and password."""
e = email
if e is None:
e = raw_input("Email: ").strip()
password = getpass.getpass("Password for %s: " % e)
return (e, password)
# If this is the dev_appserver, use fake authentication.
lc_server = server.lower()
if lc_server == "localhost" or lc_server.startswith("localhost:"):
if email is None:
email = "test@example.com"
server = HttpRpc(
server,
lambda: (email, "password"),
extra_headers={"Cookie":
'dev_appserver_login="%s:False"' % email})
# Don't try to talk to ClientLogin.
server.authenticated = True
return server
if save_cookies:
cookie_file = ".gerrit_cookies"
else:
cookie_file = None
return HttpRpc(server, GetUserCredentials,
cookie_file=cookie_file)
def UploadBundle(project,
server,
email,
dest_project,
dest_branch,
src_branch,
bases,
people,
replace_changes = None,
save_cookies=True):
srv = _GetRpcServer(email, server, save_cookies)
review = Proxy(ReviewService_Stub(srv))
tmp_fd, tmp_bundle = mkstemp(".bundle", ".gpq")
os.close(tmp_fd)
srcid = project.bare_git.rev_parse(src_branch)
revlist = project._revlist(src_branch, *bases)
if srcid not in revlist:
# This can happen if src_branch is an annotated tag
#
revlist.append(srcid)
revlist_size = len(revlist) * 42
try:
cmd = ['bundle', 'create', tmp_bundle, src_branch]
cmd.extend(bases)
if GitCommand(project, cmd).Wait() != 0:
raise UploadError('cannot create bundle')
fd = open(tmp_bundle, "rb")
bundle_id = None
segment_id = 0
next_data = fd.read(MAX_SEGMENT_SIZE - revlist_size)
while True:
this_data = next_data
next_data = fd.read(MAX_SEGMENT_SIZE)
segment_id += 1
if bundle_id is None:
req = UploadBundleRequest()
req.dest_project = str(dest_project)
req.dest_branch = str(dest_branch)
for e in people[0]:
req.reviewers.append(e)
for e in people[1]:
req.cc.append(e)
for c in revlist:
req.contained_object.append(c)
if replace_changes:
for change_id,commit_id in replace_changes.iteritems():
r = req.replace.add()
r.change_id = change_id
r.object_id = commit_id
else:
req = UploadBundleContinue()
req.bundle_id = bundle_id
req.segment_id = segment_id
req.bundle_data = this_data
if len(next_data) > 0:
req.partial_upload = True
else:
req.partial_upload = False
if bundle_id is None:
rsp = review.UploadBundle(req)
else:
rsp = review.ContinueBundle(req)
if rsp.status_code == UploadBundleResponse.CONTINUE:
bundle_id = rsp.bundle_id
elif rsp.status_code == UploadBundleResponse.RECEIVED:
bundle_id = rsp.bundle_id
return bundle_id
else:
if rsp.status_code == UploadBundleResponse.UNKNOWN_PROJECT:
reason = 'unknown project "%s"' % dest_project
elif rsp.status_code == UploadBundleResponse.UNKNOWN_BRANCH:
reason = 'unknown branch "%s"' % dest_branch
elif rsp.status_code == UploadBundleResponse.UNKNOWN_BUNDLE:
reason = 'unknown bundle'
elif rsp.status_code == UploadBundleResponse.NOT_BUNDLE_OWNER:
reason = 'not bundle owner'
elif rsp.status_code == UploadBundleResponse.BUNDLE_CLOSED:
reason = 'bundle closed'
elif rsp.status_code == UploadBundleResponse.UNAUTHORIZED_USER:
reason = ('Unauthorized user. Visit http://%s/hello to sign up.'
% server)
elif rsp.status_code == UploadBundleResponse.UNKNOWN_CHANGE:
reason = 'invalid change id'
elif rsp.status_code == UploadBundleResponse.CHANGE_CLOSED:
reason = 'one or more changes are closed'
elif rsp.status_code == UploadBundleResponse.UNKNOWN_EMAIL:
emails = [x for x in rsp.invalid_reviewers] + [
x for x in rsp.invalid_cc]
reason = 'invalid email addresses: %s' % ", ".join(emails)
else:
reason = 'unknown error ' + str(rsp.status_code)
raise UploadError(reason)
finally:
os.unlink(tmp_bundle)


@@ -16,20 +16,63 @@
import os
import sys
import subprocess
import tempfile
from signal import SIGTERM
from error import GitError
from trace import REPO_TRACE, IsTrace, Trace
GIT = 'git'
MIN_GIT_VERSION = (1, 5, 4)
GIT_DIR = 'GIT_DIR'
REPO_TRACE = 'REPO_TRACE'
LAST_GITDIR = None
LAST_CWD = None
try:
TRACE = os.environ[REPO_TRACE] == '1'
except KeyError:
TRACE = False
_ssh_proxy_path = None
_ssh_sock_path = None
_ssh_clients = []
def ssh_sock(create=True):
global _ssh_sock_path
if _ssh_sock_path is None:
if not create:
return None
dir = '/tmp'
if not os.path.exists(dir):
dir = tempfile.gettempdir()
_ssh_sock_path = os.path.join(
tempfile.mkdtemp('', 'ssh-', dir),
'master-%r@%h:%p')
return _ssh_sock_path
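# A sketch of the result (path is illustrative): ssh_sock() might return
# '/tmp/ssh-abc123/master-%r@%h:%p'; OpenSSH expands %r, %h and %p to the
# remote user, host and port, yielding one control socket per destination.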
def _ssh_proxy():
global _ssh_proxy_path
if _ssh_proxy_path is None:
_ssh_proxy_path = os.path.join(
os.path.dirname(__file__),
'git_ssh')
return _ssh_proxy_path
def _add_ssh_client(p):
_ssh_clients.append(p)
def _remove_ssh_client(p):
try:
_ssh_clients.remove(p)
except ValueError:
pass
def terminate_ssh_clients():
global _ssh_clients
for p in _ssh_clients:
try:
os.kill(p.pid, SIGTERM)
p.wait()
except OSError:
pass
_ssh_clients = []
_git_version = None
class _GitCall(object):
def version(self):
@@ -38,6 +81,21 @@ class _GitCall(object):
return p.stdout
return None
def version_tuple(self):
global _git_version
if _git_version is None:
ver_str = git.version()
if ver_str.startswith('git version '):
_git_version = tuple(
map(lambda x: int(x),
ver_str[len('git version '):].strip().split('.')[0:3]
))
else:
print >>sys.stderr, 'fatal: "%s" unsupported' % ver_str
sys.exit(1)
return _git_version
def __getattr__(self, name):
name = name.replace('_','-')
def fun(*cmdv):
@@ -47,6 +105,19 @@ class _GitCall(object):
return fun
git = _GitCall()
def git_require(min_version, fail=False):
git_version = git.version_tuple()
if min_version <= git_version:
return True
if fail:
need = '.'.join(map(lambda x: str(x), min_version))
print >>sys.stderr, 'fatal: git %s or later required' % need
sys.exit(1)
return False
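# Typical usage (a minimal sketch):
git_require(MIN_GIT_VERSION, fail=True)   # abort unless git >= 1.5.4
if git_require((1, 6, 6)):
    pass                                  # optionally use a newer feature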
def _setenv(env, name, value):
env[name] = value.encode()
class GitCommand(object):
def __init__(self,
project,
@@ -56,9 +127,10 @@ class GitCommand(object):
capture_stdout = False,
capture_stderr = False,
disable_editor = False,
ssh_proxy = False,
cwd = None,
gitdir = None):
env = dict(os.environ)
env = os.environ.copy()
for e in [REPO_TRACE,
GIT_DIR,
@@ -71,7 +143,10 @@ class GitCommand(object):
del env[e]
if disable_editor:
env['GIT_EDITOR'] = ':'
_setenv(env, 'GIT_EDITOR', ':')
if ssh_proxy:
_setenv(env, 'REPO_SSH_SOCK', ssh_sock())
_setenv(env, 'GIT_SSH', _ssh_proxy())
if project:
if not cwd:
@@ -80,9 +155,11 @@ class GitCommand(object):
gitdir = project.gitdir
command = [GIT]
if 'http_proxy' in env and 'darwin' == sys.platform:
command.extend(['-c', 'http.proxy=' + env['http_proxy']])
if bare:
if gitdir:
env[GIT_DIR] = gitdir
_setenv(env, GIT_DIR, gitdir)
cwd = None
command.extend(cmdv)
@@ -101,7 +178,7 @@ class GitCommand(object):
else:
stderr = None
if TRACE:
if IsTrace():
global LAST_CWD
global LAST_GITDIR
@@ -127,7 +204,7 @@ class GitCommand(object):
dbg += ' 1>|'
if stderr == subprocess.PIPE:
dbg += ' 2>|'
print >>sys.stderr, dbg
Trace('%s', dbg)
try:
p = subprocess.Popen(command,
@@ -139,26 +216,17 @@ class GitCommand(object):
except Exception, e:
raise GitError('%s: %s' % (command[1], e))
if ssh_proxy:
_add_ssh_client(p)
self.process = p
self.stdin = p.stdin
def Wait(self):
p = self.process
if p.stdin:
p.stdin.close()
self.stdin = None
if p.stdout:
self.stdout = p.stdout.read()
p.stdout.close()
else:
p.stdout = None
if p.stderr:
self.stderr = p.stderr.read()
p.stderr.close()
else:
p.stderr = None
return self.process.wait()
try:
p = self.process
(self.stdout, self.stderr) = p.communicate()
rc = p.returncode
finally:
_remove_ssh_client(p)
return rc


@@ -13,19 +13,42 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import cPickle
import os
import re
import subprocess
import sys
from error import GitError
try:
import threading as _threading
except ImportError:
import dummy_threading as _threading
import time
import urllib2
from signal import SIGTERM
from error import GitError, UploadError
from trace import Trace
from git_command import GitCommand
from git_command import ssh_sock
from git_command import terminate_ssh_clients
R_HEADS = 'refs/heads/'
R_TAGS = 'refs/tags/'
ID_RE = re.compile('^[0-9a-f]{40}$')
REVIEW_CACHE = dict()
def IsId(rev):
return ID_RE.match(rev)
def _key(name):
parts = name.split('.')
if len(parts) < 2:
return name.lower()
parts[ 0] = parts[ 0].lower()
parts[-1] = parts[-1].lower()
return '.'.join(parts)
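# git treats the section and variable name as case-insensitive but preserves
# subsection case, so _key normalizes accordingly:
assert _key('core.Bare') == 'core.bare'
assert _key('remote.Origin.URL') == 'remote.Origin.url'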
class GitConfig(object):
_ForUser = None
@@ -41,18 +64,25 @@ class GitConfig(object):
return cls(file = os.path.join(gitdir, 'config'),
defaults = defaults)
def __init__(self, file, defaults=None):
def __init__(self, file, defaults=None, pickleFile=None):
self.file = file
self.defaults = defaults
self._cache_dict = None
self._section_dict = None
self._remotes = {}
self._branches = {}
if pickleFile is None:
self._pickle = os.path.join(
os.path.dirname(self.file),
'.repopickle_' + os.path.basename(self.file))
else:
self._pickle = pickleFile
def Has(self, name, include_defaults = True):
"""Return true if this configuration file has the key.
"""
name = name.lower()
if name in self._cache:
if _key(name) in self._cache:
return True
if include_defaults and self.defaults:
return self.defaults.Has(name, include_defaults = True)
@@ -80,10 +110,8 @@ class GitConfig(object):
This configuration file is used first, if the key is not
defined or all = True then the defaults are also searched.
"""
name = name.lower()
try:
v = self._cache[name]
v = self._cache[_key(name)]
except KeyError:
if self.defaults:
return self.defaults.GetString(name, all = all)
@@ -107,16 +135,16 @@ class GitConfig(object):
The supplied value should be either a string,
or a list of strings (to store multiple values).
"""
name = name.lower()
key = _key(name)
try:
old = self._cache[name]
old = self._cache[key]
except KeyError:
old = []
if value is None:
if old:
del self._cache[name]
del self._cache[key]
self._do('--unset-all', name)
elif isinstance(value, list):
@@ -127,13 +155,13 @@ class GitConfig(object):
self.SetString(name, value[0])
elif old != value:
self._cache[name] = list(value)
self._cache[key] = list(value)
self._do('--replace-all', name, value[0])
for i in xrange(1, len(value)):
self._do('--add', name, value[i])
elif len(old) != 1 or old[0] != value:
self._cache[name] = [value]
self._cache[key] = [value]
self._do('--replace-all', name, value)
def GetRemote(self, name):
@ -156,6 +184,47 @@ class GitConfig(object):
self._branches[b.name] = b
return b
def GetSubSections(self, section):
"""List all subsection names matching $section.*.*
"""
return self._sections.get(section, set())
def HasSection(self, section, subsection = ''):
"""Does at least one key in section.subsection exist?
"""
try:
return subsection in self._sections[section]
except KeyError:
return False
def UrlInsteadOf(self, url):
"""Resolve any url.*.insteadof references.
"""
for new_url in self.GetSubSections('url'):
old_url = self.GetString('url.%s.insteadof' % new_url)
if old_url is not None and url.startswith(old_url):
return new_url + url[len(old_url):]
return url
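To illustrate UrlInsteadOf(), assume a user gitconfig along these lines (values hypothetical); any URL starting with the insteadof value is re-rooted onto the subsection name:

    # [url "ssh://git@mirror.example.com/"]
    #     insteadof = https://public.example.com/
    #
    # cfg.UrlInsteadOf('https://public.example.com/platform/build.git')
    #   -> 'ssh://git@mirror.example.com/platform/build.git'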
@property
def _sections(self):
d = self._section_dict
if d is None:
d = {}
for name in self._cache.keys():
p = name.split('.')
if 2 == len(p):
section = p[0]
subsect = ''
else:
section = p[0]
subsect = '.'.join(p[1:-1])
if section not in d:
d[section] = set()
d[section].add(subsect)
self._section_dict = d
return d
@property
def _cache(self):
if self._cache_dict is None:
@ -163,21 +232,74 @@ class GitConfig(object):
return self._cache_dict
def _Read(self):
d = self._do('--null', '--list')
c = {}
while d:
lf = d.index('\n')
nul = d.index('\0', lf + 1)
d = self._ReadPickle()
if d is None:
d = self._ReadGit()
self._SavePickle(d)
return d
key = d[0:lf]
val = d[lf + 1:nul]
def _ReadPickle(self):
try:
if os.path.getmtime(self._pickle) \
<= os.path.getmtime(self.file):
os.remove(self._pickle)
return None
except OSError:
return None
try:
Trace(': unpickle %s', self.file)
fd = open(self._pickle, 'rb')
try:
return cPickle.load(fd)
finally:
fd.close()
except EOFError:
os.remove(self._pickle)
return None
except IOError:
os.remove(self._pickle)
return None
except cPickle.PickleError:
os.remove(self._pickle)
return None
def _SavePickle(self, cache):
try:
fd = open(self._pickle, 'wb')
try:
cPickle.dump(cache, fd, cPickle.HIGHEST_PROTOCOL)
finally:
fd.close()
except IOError:
if os.path.exists(self._pickle):
os.remove(self._pickle)
except cPickle.PickleError:
if os.path.exists(self._pickle):
os.remove(self._pickle)
def _ReadGit(self):
"""
Read configuration data from git.
This internal method populates the GitConfig cache.
"""
c = {}
d = self._do('--null', '--list')
if d is None:
return c
for line in d.rstrip('\0').split('\0'):
if '\n' in line:
key, val = line.split('\n', 1)
else:
key = line
val = None
if key in c:
c[key].append(val)
else:
c[key] = [val]
d = d[nul + 1:]
return c
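Putting _Read(), _ReadPickle(), _SavePickle() and _ReadGit() together, the cache lifecycle is roughly as follows (paths hypothetical):

    cfg = GitConfig(file='.repo/manifests.git/config')
    cfg.Has('remote.origin.url')
    # first use: _ReadGit() parses `git config --null --list`, then
    # _SavePickle() writes .repo/manifests.git/.repopickle_config; later
    # runs unpickle that file instead of forking git, until the config
    # file's mtime catches up with the pickle's, at which point the
    # pickle is discarded and rebuilt.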
def _do(self, *args):
@ -250,6 +372,150 @@ class RefSpec(object):
return s
_master_processes = []
_master_keys = set()
_ssh_master = True
_master_keys_lock = None
def init_ssh():
"""Should be called once at the start of repo to init ssh master handling.
At the moment, all we do is create our lock.
"""
global _master_keys_lock
assert _master_keys_lock is None, "Should only call init_ssh once"
_master_keys_lock = _threading.Lock()
def _open_ssh(host, port=None):
global _ssh_master
# Acquire the lock. This is needed to prevent opening multiple masters for
# the same host when we're running "repo sync -jN" (for N > 1) _and_ the
# manifest <remote fetch="ssh://xyz"> specifies a different host from the
# one that was passed to repo init.
_master_keys_lock.acquire()
try:
# Check to see whether we already think that the master is running; if we
# think it's already running, return right away.
if port is not None:
key = '%s:%s' % (host, port)
else:
key = host
if key in _master_keys:
return True
if not _ssh_master \
or 'GIT_SSH' in os.environ \
or sys.platform in ('win32', 'cygwin'):
# failed earlier, or cygwin ssh can't do this
#
return False
# We will make two calls to ssh; this is the common part of both calls.
command_base = ['ssh',
'-o','ControlPath %s' % ssh_sock(),
host]
if port is not None:
command_base[1:1] = ['-p',str(port)]
# Since the key wasn't in _master_keys, we think that master isn't running.
# ...but before actually starting a master, we'll double-check. This can
# be important because we can't tell that 'git@myhost.com' is the same
# as 'myhost.com' where "User git" is set up in the user's ~/.ssh/config file.
check_command = command_base + ['-O','check']
try:
Trace(': %s', ' '.join(check_command))
check_process = subprocess.Popen(check_command,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
check_process.communicate() # read output, but ignore it...
isnt_running = check_process.wait()
if not isnt_running:
# Our double-check found that the master _was_ in fact running. Add to
# the list of keys.
_master_keys.add(key)
return True
except Exception:
# Ignore exceptions. We will fall back to the normal command and print
# to the log there.
pass
command = command_base[:1] + \
['-M', '-N'] + \
command_base[1:]
try:
Trace(': %s', ' '.join(command))
p = subprocess.Popen(command)
except Exception, e:
_ssh_master = False
print >>sys.stderr, \
'\nwarn: cannot enable ssh control master for %s:%s\n%s' \
% (host,port, str(e))
return False
_master_processes.append(p)
_master_keys.add(key)
time.sleep(1)
return True
finally:
_master_keys_lock.release()
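Concretely, a call such as _open_ssh('git@review.example.com', '29418') (hypothetical host) issues roughly these two commands:

    # double-check whether a master is already up:
    #   ssh -p 29418 -o 'ControlPath <ssh_sock()>' git@review.example.com -O check
    # if not, start one in master mode with no remote command:
    #   ssh -M -N -p 29418 -o 'ControlPath <ssh_sock()>' git@review.example.com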
def close_ssh():
global _master_keys_lock
terminate_ssh_clients()
for p in _master_processes:
try:
os.kill(p.pid, SIGTERM)
p.wait()
except OSError:
pass
del _master_processes[:]
_master_keys.clear()
d = ssh_sock(create=False)
if d:
try:
os.rmdir(os.path.dirname(d))
except OSError:
pass
# We're done with the lock, so we can delete it.
_master_keys_lock = None
URI_SCP = re.compile(r'^([^@:]*@?[^:/]{1,}):')
URI_ALL = re.compile(r'^([a-z][a-z+-]*)://([^@/]*@?[^/]*)/')
def GetSchemeFromUrl(url):
m = URI_ALL.match(url)
if m:
return m.group(1)
return None
def _preconnect(url):
m = URI_ALL.match(url)
if m:
scheme = m.group(1)
host = m.group(2)
if ':' in host:
host, port = host.split(':')
else:
port = None
if scheme in ('ssh', 'git+ssh', 'ssh+git'):
return _open_ssh(host, port)
return False
m = URI_SCP.match(url)
if m:
host = m.group(1)
return _open_ssh(host)
return False
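The two patterns split the work between URL-style and scp-style addresses; hypothetical examples:

    # _preconnect('ssh://git@review.example.com:29418/platform/build')
    #   -> _open_ssh('git@review.example.com', '29418')
    # _preconnect('git@review.example.com:platform/build.git')   # scp form
    #   -> _open_ssh('git@review.example.com')
    # _preconnect('https://review.example.com/platform/build')
    #   -> False   (nothing to pre-connect)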
class Remote(object):
"""Configuration options related to a remote.
"""
@ -261,6 +527,84 @@ class Remote(object):
self.projectname = self._Get('projectname')
self.fetch = map(lambda x: RefSpec.FromString(x),
self._Get('fetch', all=True))
self._review_url = None
def _InsteadOf(self):
globCfg = GitConfig.ForUser()
urlList = globCfg.GetSubSections('url')
longest = ""
longestUrl = ""
for url in urlList:
key = "url." + url + ".insteadOf"
insteadOfList = globCfg.GetString(key, all=True)
for insteadOf in insteadOfList:
if self.url.startswith(insteadOf) \
and len(insteadOf) > len(longest):
longest = insteadOf
longestUrl = url
if len(longest) == 0:
return self.url
return self.url.replace(longest, longestUrl, 1)
def PreConnectFetch(self):
connectionUrl = self._InsteadOf()
return _preconnect(connectionUrl)
def ReviewUrl(self, userEmail):
if self._review_url is None:
if self.review is None:
return None
u = self.review
if not u.startswith('http:') and not u.startswith('https:'):
u = 'http://%s' % u
if u.endswith('/Gerrit'):
u = u[:len(u) - len('/Gerrit')]
if u.endswith('/ssh_info'):
u = u[:len(u) - len('/ssh_info')]
if not u.endswith('/'):
u += '/'
http_url = u
if u in REVIEW_CACHE:
self._review_url = REVIEW_CACHE[u]
elif 'REPO_HOST_PORT_INFO' in os.environ:
host, port = os.environ['REPO_HOST_PORT_INFO'].split()
self._review_url = self._SshReviewUrl(userEmail, host, port)
REVIEW_CACHE[u] = self._review_url
else:
try:
info_url = u + 'ssh_info'
info = urllib2.urlopen(info_url).read()
if '<' in info:
# Assume the server gave us some sort of HTML
# response back, like maybe a login page.
#
raise UploadError('%s: Cannot parse response' % info_url)
if info == 'NOT_AVAILABLE':
# Assume HTTP if SSH is not enabled.
self._review_url = http_url + 'p/'
else:
host, port = info.split()
self._review_url = self._SshReviewUrl(userEmail, host, port)
except urllib2.HTTPError, e:
raise UploadError('%s: %s' % (self.review, str(e)))
except urllib2.URLError, e:
raise UploadError('%s: %s' % (self.review, str(e)))
REVIEW_CACHE[u] = self._review_url
return self._review_url + self.projectname
def _SshReviewUrl(self, userEmail, host, port):
username = self._config.GetString('review.%s.username' % self.review)
if username is None:
username = userEmail.split('@')[0]
return 'ssh://%s@%s:%s/' % (username, host, port)
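As a worked example (server hypothetical): review = 'review.example.com' is normalized to http://review.example.com/, whose ssh_info endpoint is then probed:

    # GET http://review.example.com/ssh_info -> 'review.example.com 29418'
    #   ReviewUrl('alice@example.com')
    #     -> 'ssh://alice@review.example.com:29418/' + projectname
    # GET .../ssh_info -> 'NOT_AVAILABLE'   (SSH disabled on the server)
    #   ReviewUrl('alice@example.com')
    #     -> 'http://review.example.com/p/' + projectname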
def ToLocal(self, rev):
"""Convert a remote revision string to something we have locally.
@ -337,11 +681,23 @@ class Branch(object):
def Save(self):
"""Save this branch back into the configuration.
"""
self._Set('merge', self.merge)
if self.remote:
self._Set('remote', self.remote.name)
if self._config.HasSection('branch', self.name):
if self.remote:
self._Set('remote', self.remote.name)
else:
self._Set('remote', None)
self._Set('merge', self.merge)
else:
self._Set('remote', None)
fd = open(self._config.file, 'ab')
try:
fd.write('[branch "%s"]\n' % self.name)
if self.remote:
fd.write('\tremote = %s\n' % self.remote.name)
if self.merge:
fd.write('\tmerge = %s\n' % self.merge)
finally:
fd.close()
def _Set(self, key, value):
key = 'branch.%s.%s' % (self.name, key)

git_refs.py (new file, 162 lines)

@ -0,0 +1,162 @@
#
# Copyright (C) 2009 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
from trace import Trace
HEAD = 'HEAD'
R_HEADS = 'refs/heads/'
R_TAGS = 'refs/tags/'
R_PUB = 'refs/published/'
R_M = 'refs/remotes/m/'
class GitRefs(object):
def __init__(self, gitdir):
self._gitdir = gitdir
self._phyref = None
self._symref = None
self._mtime = {}
@property
def all(self):
self._EnsureLoaded()
return self._phyref
def get(self, name):
try:
return self.all[name]
except KeyError:
return ''
def deleted(self, name):
if self._phyref is not None:
if name in self._phyref:
del self._phyref[name]
if name in self._symref:
del self._symref[name]
if name in self._mtime:
del self._mtime[name]
def symref(self, name):
try:
self._EnsureLoaded()
return self._symref[name]
except KeyError:
return ''
def _EnsureLoaded(self):
if self._phyref is None or self._NeedUpdate():
self._LoadAll()
def _NeedUpdate(self):
Trace(': scan refs %s', self._gitdir)
for name, mtime in self._mtime.iteritems():
try:
if mtime != os.path.getmtime(os.path.join(self._gitdir, name)):
return True
except OSError:
return True
return False
def _LoadAll(self):
Trace(': load refs %s', self._gitdir)
self._phyref = {}
self._symref = {}
self._mtime = {}
self._ReadPackedRefs()
self._ReadLoose('refs/')
self._ReadLoose1(os.path.join(self._gitdir, HEAD), HEAD)
scan = self._symref
attempts = 0
while scan and attempts < 5:
scan_next = {}
for name, dest in scan.iteritems():
if dest in self._phyref:
self._phyref[name] = self._phyref[dest]
else:
scan_next[name] = dest
scan = scan_next
attempts += 1
def _ReadPackedRefs(self):
path = os.path.join(self._gitdir, 'packed-refs')
try:
fd = open(path, 'rb')
mtime = os.path.getmtime(path)
except IOError:
return
except OSError:
return
try:
for line in fd:
if line[0] == '#':
continue
if line[0] == '^':
continue
line = line[:-1]
p = line.split(' ')
id = p[0]
name = p[1]
self._phyref[name] = id
finally:
fd.close()
self._mtime['packed-refs'] = mtime
def _ReadLoose(self, prefix):
base = os.path.join(self._gitdir, prefix)
for name in os.listdir(base):
p = os.path.join(base, name)
if os.path.isdir(p):
self._mtime[prefix] = os.path.getmtime(base)
self._ReadLoose(prefix + name + '/')
elif name.endswith('.lock'):
pass
else:
self._ReadLoose1(p, prefix + name)
def _ReadLoose1(self, path, name):
try:
fd = open(path, 'rb')
except:
return
try:
try:
mtime = os.path.getmtime(path)
id = fd.readline()
except:
return
finally:
fd.close()
if not id:
return
id = id[:-1]
if id.startswith('ref: '):
self._symref[name] = id[5:]
else:
self._phyref[name] = id
self._mtime[name] = mtime
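A minimal usage sketch of GitRefs (path hypothetical):

    refs = GitRefs('.repo/projects/platform/build.git')
    refs.get('refs/heads/master')   # -> 40-hex id, or '' if absent
    refs.symref('HEAD')             # -> e.g. 'refs/heads/master'
    # Loaded refs are cached; _NeedUpdate() triggers a reload only when
    # the mtime of a tracked file under the gitdir changes.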

git_ssh (new executable file, 2 lines)

@ -0,0 +1,2 @@
#!/bin/sh
exec ssh -o "ControlMaster no" -o "ControlPath $REPO_SSH_SOCK" "$@"

hooks/commit-msg (new executable file, 101 lines)

@ -0,0 +1,101 @@
#!/bin/sh
# From Gerrit Code Review 2.1.2-rc2-33-g7e30c72
#
# Part of Gerrit Code Review (http://code.google.com/p/gerrit/)
#
# Copyright (C) 2009 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
CHANGE_ID_AFTER="Bug|Issue"
MSG="$1"
# Check for, and add if missing, a unique Change-Id
#
add_ChangeId() {
clean_message=$(sed -e '
/^diff --git a\/.*/{
s///
q
}
/^Signed-off-by:/d
/^#/d
' "$MSG" | git stripspace)
if test -z "$clean_message"
then
return
fi
if grep -i '^Change-Id:' "$MSG" >/dev/null
then
return
fi
id=$(_gen_ChangeId)
perl -e '
$MSG = shift;
$id = shift;
$CHANGE_ID_AFTER = shift;
undef $/;
open(I, $MSG); $_ = <I>; close I;
s|^diff --git a/.*||ms;
s|^#.*$||mg;
exit unless $_;
@message = split /\n/;
$haveFooter = 0;
$startFooter = @message;
for($line = @message - 1; $line >= 0; $line--) {
$_ = $message[$line];
($haveFooter++, next) if /^[a-zA-Z0-9-]+:/;
next if /^[ []/;
$startFooter = $line if ($haveFooter && /^\r?$/);
last;
}
@footer = @message[$startFooter+1..@message];
@message = @message[0..$startFooter];
push(@footer, "") unless @footer;
for ($line = 0; $line < @footer; $line++) {
$_ = $footer[$line];
next if /^($CHANGE_ID_AFTER):/i;
last;
}
splice(@footer, $line, 0, "Change-Id: I$id");
$_ = join("\n", @message, @footer);
open(O, ">$MSG"); print O; close O;
' "$MSG" "$id" "$CHANGE_ID_AFTER"
}
_gen_ChangeIdInput() {
echo "tree $(git write-tree)"
if parent=$(git rev-parse HEAD^0 2>/dev/null)
then
echo "parent $parent"
fi
echo "author $(git var GIT_AUTHOR_IDENT)"
echo "committer $(git var GIT_COMMITTER_IDENT)"
echo
printf '%s' "$clean_message"
}
_gen_ChangeId() {
_gen_ChangeIdInput |
git hash-object -t commit --stdin
}
add_ChangeId
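In effect the hook derives the Change-Id the same way git derives a commit id: _gen_ChangeIdInput assembles a commit-shaped record and git hash-object digests it. The hashed input looks like this (identities hypothetical):

    # tree <sha1 from `git write-tree`>
    # parent <sha1 of HEAD, when a parent exists>
    # author A U Thor <author@example.com> 1338908489 -0700
    # committer C O Mitter <committer@example.com> 1338908489 -0700
    #
    # <cleaned commit message>
    #
    # `git hash-object -t commit --stdin` over that text yields the hex id
    # written into the footer as 'Change-Id: I<id>'.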

hooks/pre-auto-gc

@ -38,6 +38,11 @@ elif test -x /usr/bin/pmset && /usr/bin/pmset -g batt |
grep -q "Currently drawing from 'AC Power'"
then
exit 0
elif test -d /sys/bus/acpi/drivers/battery && test 0 = \
"$(find /sys/bus/acpi/drivers/battery/ -type l | wc -l)";
then
# No battery exists.
exit 0
fi
echo "Auto packing deferred; not on AC"

main.py (205 lines changed)

@ -22,16 +22,27 @@ if __name__ == '__main__':
del sys.argv[-1]
del magic
import netrc
import optparse
import os
import re
import sys
import time
import urllib2
from command import InteractiveCommand, PagedCommand
from trace import SetTrace
from git_command import git, GitCommand
from git_config import init_ssh, close_ssh
from command import InteractiveCommand
from command import MirrorSafeCommand
from command import PagedCommand
from subcmds.version import Version
from editor import Editor
from error import DownloadError
from error import ManifestInvalidRevisionError
from error import NoSuchProjectError
from error import RepoChangedException
from manifest import Manifest
from manifest_xml import XmlManifest
from pager import RunPager
from subcmds import all as all_commands
@ -45,13 +56,25 @@ global_options.add_option('-p', '--paginate',
global_options.add_option('--no-pager',
dest='no_pager', action='store_true',
help='disable the pager')
global_options.add_option('--trace',
dest='trace', action='store_true',
help='trace git command execution')
global_options.add_option('--time',
dest='time', action='store_true',
help='time repo command execution')
global_options.add_option('--version',
dest='show_version', action='store_true',
help='display this version of repo')
class _Repo(object):
def __init__(self, repodir):
self.repodir = repodir
self.commands = all_commands
# add 'branch' as an alias for 'branches'
all_commands['branch'] = all_commands['branches']
def _Run(self, argv):
result = 0
name = None
glob = []
@ -68,18 +91,35 @@ class _Repo(object):
argv = []
gopts, gargs = global_options.parse_args(glob)
if gopts.trace:
SetTrace()
if gopts.show_version:
if name == 'help':
name = 'version'
else:
print >>sys.stderr, 'fatal: invalid usage of --version'
return 1
try:
cmd = self.commands[name]
except KeyError:
print >>sys.stderr,\
"repo: '%s' is not a repo command. See 'repo help'."\
% name
sys.exit(1)
return 1
cmd.repodir = self.repodir
cmd.manifest = Manifest(cmd.repodir)
cmd.manifest = XmlManifest(cmd.repodir)
Editor.globalConfig = cmd.manifest.globalConfig
if not isinstance(cmd, MirrorSafeCommand) and cmd.manifest.IsMirror:
print >>sys.stderr, \
"fatal: '%s' requires a working directory"\
% name
return 1
copts, cargs = cmd.OptionParser.parse_args(argv)
if not gopts.no_pager and not isinstance(cmd, InteractiveCommand):
config = cmd.manifest.globalConfig
if gopts.pager:
@ -87,19 +127,42 @@ class _Repo(object):
else:
use_pager = config.GetBoolean('pager.%s' % name)
if use_pager is None:
use_pager = isinstance(cmd, PagedCommand)
use_pager = cmd.WantPager(copts)
if use_pager:
RunPager(config)
copts, cargs = cmd.OptionParser.parse_args(argv)
try:
cmd.Execute(copts, cargs)
start = time.time()
try:
result = cmd.Execute(copts, cargs)
finally:
elapsed = time.time() - start
hours, remainder = divmod(elapsed, 3600)
minutes, seconds = divmod(remainder, 60)
if gopts.time:
if hours == 0:
print >>sys.stderr, 'real\t%dm%.3fs' \
% (minutes, seconds)
else:
print >>sys.stderr, 'real\t%dh%dm%.3fs' \
% (hours, minutes, seconds)
except DownloadError, e:
print >>sys.stderr, 'error: %s' % str(e)
return 1
except ManifestInvalidRevisionError, e:
print >>sys.stderr, 'error: %s' % str(e)
return 1
except NoSuchProjectError, e:
if e.name:
print >>sys.stderr, 'error: project %s not found' % e.name
else:
print >>sys.stderr, 'error: no project in current directory'
sys.exit(1)
return 1
return result
def _MyRepoPath():
return os.path.dirname(__file__)
def _MyWrapperPath():
return os.path.join(os.path.dirname(__file__), 'repo')
@ -167,7 +230,117 @@ def _PruneOptions(argv, opt):
continue
i += 1
_user_agent = None
def _UserAgent():
global _user_agent
if _user_agent is None:
py_version = sys.version_info
os_name = sys.platform
if os_name == 'linux2':
os_name = 'Linux'
elif os_name == 'win32':
os_name = 'Win32'
elif os_name == 'cygwin':
os_name = 'Cygwin'
elif os_name == 'darwin':
os_name = 'Darwin'
p = GitCommand(
None, ['describe', 'HEAD'],
cwd = _MyRepoPath(),
capture_stdout = True)
if p.Wait() == 0:
repo_version = p.stdout
if len(repo_version) > 0 and repo_version[-1] == '\n':
repo_version = repo_version[0:-1]
if len(repo_version) > 0 and repo_version[0] == 'v':
repo_version = repo_version[1:]
else:
repo_version = 'unknown'
_user_agent = 'git-repo/%s (%s) git/%s Python/%d.%d.%d' % (
repo_version,
os_name,
'.'.join(map(lambda d: str(d), git.version_tuple())),
py_version[0], py_version[1], py_version[2])
return _user_agent
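The assembled header, which lets servers identify repo clients in their logs, ends up looking something like this (version numbers hypothetical):

    # User-Agent: git-repo/1.7.8 (Linux) git/1.7.9.5 Python/2.6.8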
class _UserAgentHandler(urllib2.BaseHandler):
def http_request(self, req):
req.add_header('User-Agent', _UserAgent())
return req
def https_request(self, req):
req.add_header('User-Agent', _UserAgent())
return req
class _BasicAuthHandler(urllib2.HTTPBasicAuthHandler):
def http_error_auth_reqed(self, authreq, host, req, headers):
try:
old_add_header = req.add_header
def _add_header(name, val):
val = val.replace('\n', '')
old_add_header(name, val)
req.add_header = _add_header
return urllib2.AbstractBasicAuthHandler.http_error_auth_reqed(
self, authreq, host, req, headers)
except:
reset = getattr(self, 'reset_retry_count', None)
if reset is not None:
reset()
elif getattr(self, 'retried', None):
self.retried = 0
raise
class _DigestAuthHandler(urllib2.HTTPDigestAuthHandler):
def http_error_auth_reqed(self, auth_header, host, req, headers):
try:
old_add_header = req.add_header
def _add_header(name, val):
val = val.replace('\n', '')
old_add_header(name, val)
req.add_header = _add_header
return urllib2.AbstractDigestAuthHandler.http_error_auth_reqed(
self, auth_header, host, req, headers)
except:
reset = getattr(self, 'reset_retry_count', None)
if reset is not None:
reset()
elif getattr(self, 'retried', None):
self.retried = 0
raise
def init_http():
handlers = [_UserAgentHandler()]
mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
try:
n = netrc.netrc()
for host in n.hosts:
p = n.hosts[host]
mgr.add_password(p[1], 'http://%s/' % host, p[0], p[2])
mgr.add_password(p[1], 'https://%s/' % host, p[0], p[2])
except netrc.NetrcParseError:
pass
except IOError:
pass
handlers.append(_BasicAuthHandler(mgr))
handlers.append(_DigestAuthHandler(mgr))
if 'http_proxy' in os.environ:
url = os.environ['http_proxy']
handlers.append(urllib2.ProxyHandler({'http': url, 'https': url}))
if 'REPO_CURL_VERBOSE' in os.environ:
handlers.append(urllib2.HTTPHandler(debuglevel=1))
handlers.append(urllib2.HTTPSHandler(debuglevel=1))
urllib2.install_opener(urllib2.build_opener(*handlers))
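For example, a ~/.netrc entry like the following (hypothetical) would seed the password manager for both schemes of that host:

    # machine review.example.com login alice password s3cret
    #   -> mgr.add_password(account, 'http://review.example.com/', 'alice', 's3cret')
    #   -> mgr.add_password(account, 'https://review.example.com/', 'alice', 's3cret')
    # (account is the netrc account field, p[1], and is typically None)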
def _Main(argv):
result = 0
opt = optparse.OptionParser(usage="repo wrapperinfo -- ...")
opt.add_option("--repo-dir", dest="repodir",
help="path to .repo/")
@ -181,11 +354,19 @@ def _Main(argv):
_CheckWrapperVersion(opt.wrapper_version, opt.wrapper_path)
_CheckRepoDir(opt.repodir)
Version.wrapper_version = opt.wrapper_version
Version.wrapper_path = opt.wrapper_path
repo = _Repo(opt.repodir)
try:
repo._Run(argv)
try:
init_ssh()
init_http()
result = repo._Run(argv) or 0
finally:
close_ssh()
except KeyboardInterrupt:
sys.exit(1)
result = 1
except RepoChangedException, rce:
# If repo changed, re-exec ourselves.
#
@ -196,7 +377,9 @@ def _Main(argv):
except OSError, e:
print >>sys.stderr, 'fatal: cannot restart repo after upgrade'
print >>sys.stderr, 'fatal: %s' % e
sys.exit(128)
result = 128
sys.exit(result)
if __name__ == '__main__':
_Main(sys.argv[1:])

manifest.py (deleted)

@ -1,350 +0,0 @@
#
# Copyright (C) 2008 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
import xml.dom.minidom
from git_config import GitConfig, IsId
from project import Project, MetaProject, R_HEADS
from remote import Remote
from error import ManifestParseError
MANIFEST_FILE_NAME = 'manifest.xml'
LOCAL_MANIFEST_NAME = 'local_manifest.xml'
class _Default(object):
"""Project defaults within the manifest."""
revision = None
remote = None
class Manifest(object):
"""manages the repo configuration file"""
def __init__(self, repodir):
self.repodir = os.path.abspath(repodir)
self.topdir = os.path.dirname(self.repodir)
self.manifestFile = os.path.join(self.repodir, MANIFEST_FILE_NAME)
self.globalConfig = GitConfig.ForUser()
self.repoProject = MetaProject(self, 'repo',
gitdir = os.path.join(repodir, 'repo/.git'),
worktree = os.path.join(repodir, 'repo'))
self.manifestProject = MetaProject(self, 'manifests',
gitdir = os.path.join(repodir, 'manifests.git'),
worktree = os.path.join(repodir, 'manifests'))
self._Unload()
def Link(self, name):
"""Update the repo metadata to use a different manifest.
"""
path = os.path.join(self.manifestProject.worktree, name)
if not os.path.isfile(path):
raise ManifestParseError('manifest %s not found' % name)
old = self.manifestFile
try:
self.manifestFile = path
self._Unload()
self._Load()
finally:
self.manifestFile = old
try:
if os.path.exists(self.manifestFile):
os.remove(self.manifestFile)
os.symlink('manifests/%s' % name, self.manifestFile)
except OSError, e:
raise ManifestParseError('cannot link manifest %s' % name)
@property
def projects(self):
self._Load()
return self._projects
@property
def remotes(self):
self._Load()
return self._remotes
@property
def default(self):
self._Load()
return self._default
@property
def IsMirror(self):
return self.manifestProject.config.GetBoolean('repo.mirror')
def _Unload(self):
self._loaded = False
self._projects = {}
self._remotes = {}
self._default = None
self.branch = None
def _Load(self):
if not self._loaded:
m = self.manifestProject
b = m.GetBranch(m.CurrentBranch).merge
if b.startswith(R_HEADS):
b = b[len(R_HEADS):]
self.branch = b
self._ParseManifest(True)
local = os.path.join(self.repodir, LOCAL_MANIFEST_NAME)
if os.path.exists(local):
try:
real = self.manifestFile
self.manifestFile = local
self._ParseManifest(False)
finally:
self.manifestFile = real
if self.IsMirror:
self._AddMetaProjectMirror(self.repoProject)
self._AddMetaProjectMirror(self.manifestProject)
self._loaded = True
def _ParseManifest(self, is_root_file):
root = xml.dom.minidom.parse(self.manifestFile)
if not root or not root.childNodes:
raise ManifestParseError, \
"no root node in %s" % \
self.manifestFile
config = root.childNodes[0]
if config.nodeName != 'manifest':
raise ManifestParseError, \
"no <manifest> in %s" % \
self.manifestFile
for node in config.childNodes:
if node.nodeName == 'remove-project':
name = self._reqatt(node, 'name')
try:
del self._projects[name]
except KeyError:
raise ManifestParseError, \
'project %s not found' % \
(name)
for node in config.childNodes:
if node.nodeName == 'remote':
remote = self._ParseRemote(node)
if self._remotes.get(remote.name):
raise ManifestParseError, \
'duplicate remote %s in %s' % \
(remote.name, self.manifestFile)
self._remotes[remote.name] = remote
for node in config.childNodes:
if node.nodeName == 'default':
if self._default is not None:
raise ManifestParseError, \
'duplicate default in %s' % \
(self.manifestFile)
self._default = self._ParseDefault(node)
if self._default is None:
self._default = _Default()
for node in config.childNodes:
if node.nodeName == 'project':
project = self._ParseProject(node)
if self._projects.get(project.name):
raise ManifestParseError, \
'duplicate project %s in %s' % \
(project.name, self.manifestFile)
self._projects[project.name] = project
for node in config.childNodes:
if node.nodeName == 'add-remote':
pn = self._reqatt(node, 'to-project')
project = self._projects.get(pn)
if not project:
raise ManifestParseError, \
'project %s not defined in %s' % \
(pn, self.manifestFile)
self._ParseProjectExtraRemote(project, node)
def _AddMetaProjectMirror(self, m):
name = None
m_url = m.GetRemote(m.remote.name).url
if m_url.endswith('/.git'):
raise ManifestParseError, 'refusing to mirror %s' % m_url
if self._default and self._default.remote:
url = self._default.remote.fetchUrl
if not url.endswith('/'):
url += '/'
if m_url.startswith(url):
remote = self._default.remote
name = m_url[len(url):]
if name is None:
s = m_url.rindex('/') + 1
remote = Remote('origin', fetch = m_url[:s])
name = m_url[s:]
if name.endswith('.git'):
name = name[:-4]
if name not in self._projects:
m.PreSync()
gitdir = os.path.join(self.topdir, '%s.git' % name)
project = Project(manifest = self,
name = name,
remote = remote,
gitdir = gitdir,
worktree = None,
relpath = None,
revision = m.revision)
self._projects[project.name] = project
def _ParseRemote(self, node):
"""
reads a <remote> element from the manifest file
"""
name = self._reqatt(node, 'name')
fetch = self._reqatt(node, 'fetch')
review = node.getAttribute('review')
if review == '':
review = None
projectName = node.getAttribute('project-name')
if projectName == '':
projectName = None
r = Remote(name=name,
fetch=fetch,
review=review,
projectName=projectName)
for n in node.childNodes:
if n.nodeName == 'require':
r.requiredCommits.append(self._reqatt(n, 'commit'))
return r
def _ParseDefault(self, node):
"""
reads a <default> element from the manifest file
"""
d = _Default()
d.remote = self._get_remote(node)
d.revision = node.getAttribute('revision')
if d.revision == '':
d.revision = None
return d
def _ParseProject(self, node):
"""
reads a <project> element from the manifest file
"""
name = self._reqatt(node, 'name')
remote = self._get_remote(node)
if remote is None:
remote = self._default.remote
if remote is None:
raise ManifestParseError, \
"no remote for project %s within %s" % \
(name, self.manifestFile)
revision = node.getAttribute('revision')
if not revision:
revision = self._default.revision
if not revision:
raise ManifestParseError, \
"no revision for project %s within %s" % \
(name, self.manifestFile)
path = node.getAttribute('path')
if not path:
path = name
if path.startswith('/'):
raise ManifestParseError, \
"project %s path cannot be absolute in %s" % \
(name, self.manifestFile)
if self.IsMirror:
relpath = None
worktree = None
gitdir = os.path.join(self.topdir, '%s.git' % name)
else:
worktree = os.path.join(self.topdir, path)
gitdir = os.path.join(self.repodir, 'projects/%s.git' % path)
project = Project(manifest = self,
name = name,
remote = remote,
gitdir = gitdir,
worktree = worktree,
relpath = path,
revision = revision)
for n in node.childNodes:
if n.nodeName == 'remote':
self._ParseProjectExtraRemote(project, n)
elif n.nodeName == 'copyfile':
self._ParseCopyFile(project, n)
return project
def _ParseProjectExtraRemote(self, project, n):
r = self._ParseRemote(n)
if project.extraRemotes.get(r.name) \
or project.remote.name == r.name:
raise ManifestParseError, \
'duplicate remote %s in project %s in %s' % \
(r.name, project.name, self.manifestFile)
project.extraRemotes[r.name] = r
def _ParseCopyFile(self, project, node):
src = self._reqatt(node, 'src')
dest = self._reqatt(node, 'dest')
if not self.IsMirror:
# src is project relative;
# dest is relative to the top of the tree
project.AddCopyFile(src, os.path.join(self.topdir, dest))
def _get_remote(self, node):
name = node.getAttribute('remote')
if not name:
return None
v = self._remotes.get(name)
if not v:
raise ManifestParseError, \
"remote %s not defined in %s" % \
(name, self.manifestFile)
return v
def _reqatt(self, node, attname):
"""
reads a required attribute from the node.
"""
v = node.getAttribute(attname)
if not v:
raise ManifestParseError, \
"no %s in <%s> within %s" % \
(attname, node.nodeName, self.manifestFile)
return v

manifest_xml.py (new file, 639 lines)

@ -0,0 +1,639 @@
#
# Copyright (C) 2008 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import itertools
import os
import re
import sys
import urlparse
import xml.dom.minidom
from git_config import GitConfig, IsId
from project import RemoteSpec, Project, MetaProject, R_HEADS, HEAD
from error import ManifestParseError
MANIFEST_FILE_NAME = 'manifest.xml'
LOCAL_MANIFEST_NAME = 'local_manifest.xml'
urlparse.uses_relative.extend(['ssh', 'git'])
urlparse.uses_netloc.extend(['ssh', 'git'])
class _Default(object):
"""Project defaults within the manifest."""
revisionExpr = None
remote = None
sync_j = 1
sync_c = False
class _XmlRemote(object):
def __init__(self,
name,
fetch=None,
manifestUrl=None,
review=None):
self.name = name
self.fetchUrl = fetch
self.manifestUrl = manifestUrl
self.reviewUrl = review
self.resolvedFetchUrl = self._resolveFetchUrl()
def _resolveFetchUrl(self):
url = self.fetchUrl.rstrip('/')
manifestUrl = self.manifestUrl.rstrip('/')
# urljoin will get confused if there is no scheme in the base url
# ie, if manifestUrl is of the form <hostname:port>
if manifestUrl.find(':') != manifestUrl.find('/') - 1:
manifestUrl = 'gopher://' + manifestUrl
url = urlparse.urljoin(manifestUrl, url)
return re.sub(r'^gopher://', '', url)
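The temporary gopher:// prefix is only there so urljoin treats a scheme-less manifest URL as absolute; two hypothetical resolutions:

    # manifestUrl='git://example.com/platform/manifest', fetch='..'
    #   -> urljoin(...) -> 'git://example.com/'
    # manifestUrl='example.com:platform/manifest'   (scp-like, no scheme)
    #   -> joined as 'gopher://example.com:platform/manifest', and the
    #      gopher:// prefix is stripped from the final result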
def ToRemoteSpec(self, projectName):
url = self.resolvedFetchUrl.rstrip('/') + '/' + projectName
return RemoteSpec(self.name, url, self.reviewUrl)
class XmlManifest(object):
"""manages the repo configuration file"""
def __init__(self, repodir):
self.repodir = os.path.abspath(repodir)
self.topdir = os.path.dirname(self.repodir)
self.manifestFile = os.path.join(self.repodir, MANIFEST_FILE_NAME)
self.globalConfig = GitConfig.ForUser()
self.repoProject = MetaProject(self, 'repo',
gitdir = os.path.join(repodir, 'repo/.git'),
worktree = os.path.join(repodir, 'repo'))
self.manifestProject = MetaProject(self, 'manifests',
gitdir = os.path.join(repodir, 'manifests.git'),
worktree = os.path.join(repodir, 'manifests'))
self._Unload()
def Override(self, name):
"""Use a different manifest, just for the current instantiation.
"""
path = os.path.join(self.manifestProject.worktree, name)
if not os.path.isfile(path):
raise ManifestParseError('manifest %s not found' % name)
old = self.manifestFile
try:
self.manifestFile = path
self._Unload()
self._Load()
finally:
self.manifestFile = old
def Link(self, name):
"""Update the repo metadata to use a different manifest.
"""
self.Override(name)
try:
if os.path.exists(self.manifestFile):
os.remove(self.manifestFile)
os.symlink('manifests/%s' % name, self.manifestFile)
except OSError, e:
raise ManifestParseError('cannot link manifest %s' % name)
def _RemoteToXml(self, r, doc, root):
e = doc.createElement('remote')
root.appendChild(e)
e.setAttribute('name', r.name)
e.setAttribute('fetch', r.fetchUrl)
if r.reviewUrl is not None:
e.setAttribute('review', r.reviewUrl)
def Save(self, fd, peg_rev=False):
"""Write the current manifest out to the given file descriptor.
"""
mp = self.manifestProject
groups = mp.config.GetString('manifest.groups')
if not groups:
groups = 'default'
groups = [x for x in re.split(r'[,\s]+', groups) if x]
doc = xml.dom.minidom.Document()
root = doc.createElement('manifest')
doc.appendChild(root)
# Save out the notice. There's a little bit of work here to give it the
# right whitespace, which assumes that the notice is automatically indented
# by 4 by minidom.
if self.notice:
notice_element = root.appendChild(doc.createElement('notice'))
notice_lines = self.notice.splitlines()
indented_notice = ('\n'.join(" "*4 + line for line in notice_lines))[4:]
notice_element.appendChild(doc.createTextNode(indented_notice))
d = self.default
sort_remotes = list(self.remotes.keys())
sort_remotes.sort()
for r in sort_remotes:
self._RemoteToXml(self.remotes[r], doc, root)
if self.remotes:
root.appendChild(doc.createTextNode(''))
have_default = False
e = doc.createElement('default')
if d.remote:
have_default = True
e.setAttribute('remote', d.remote.name)
if d.revisionExpr:
have_default = True
e.setAttribute('revision', d.revisionExpr)
if d.sync_j > 1:
have_default = True
e.setAttribute('sync-j', '%d' % d.sync_j)
if d.sync_c:
have_default = True
e.setAttribute('sync-c', 'true')
if have_default:
root.appendChild(e)
root.appendChild(doc.createTextNode(''))
if self._manifest_server:
e = doc.createElement('manifest-server')
e.setAttribute('url', self._manifest_server)
root.appendChild(e)
root.appendChild(doc.createTextNode(''))
sort_projects = list(self.projects.keys())
sort_projects.sort()
for p in sort_projects:
p = self.projects[p]
if not p.MatchesGroups(groups):
continue
e = doc.createElement('project')
root.appendChild(e)
e.setAttribute('name', p.name)
if p.relpath != p.name:
e.setAttribute('path', p.relpath)
if not d.remote or p.remote.name != d.remote.name:
e.setAttribute('remote', p.remote.name)
if peg_rev:
if self.IsMirror:
e.setAttribute('revision',
p.bare_git.rev_parse(p.revisionExpr + '^0'))
else:
e.setAttribute('revision',
p.work_git.rev_parse(HEAD + '^0'))
elif not d.revisionExpr or p.revisionExpr != d.revisionExpr:
e.setAttribute('revision', p.revisionExpr)
for c in p.copyfiles:
ce = doc.createElement('copyfile')
ce.setAttribute('src', c.src)
ce.setAttribute('dest', c.dest)
e.appendChild(ce)
egroups = [g for g in p.groups if g != 'default']
if egroups:
e.setAttribute('groups', ','.join(egroups))
for a in p.annotations:
if a.keep == "true":
ae = doc.createElement('annotation')
ae.setAttribute('name', a.name)
ae.setAttribute('value', a.value)
e.appendChild(ae)
if p.sync_c:
e.setAttribute('sync-c', 'true')
if self._repo_hooks_project:
root.appendChild(doc.createTextNode(''))
e = doc.createElement('repo-hooks')
e.setAttribute('in-project', self._repo_hooks_project.name)
e.setAttribute('enabled-list',
' '.join(self._repo_hooks_project.enabled_repo_hooks))
root.appendChild(e)
doc.writexml(fd, '', ' ', '\n', 'UTF-8')
@property
def projects(self):
self._Load()
return self._projects
@property
def remotes(self):
self._Load()
return self._remotes
@property
def default(self):
self._Load()
return self._default
@property
def repo_hooks_project(self):
self._Load()
return self._repo_hooks_project
@property
def notice(self):
self._Load()
return self._notice
@property
def manifest_server(self):
self._Load()
return self._manifest_server
@property
def IsMirror(self):
return self.manifestProject.config.GetBoolean('repo.mirror')
def _Unload(self):
self._loaded = False
self._projects = {}
self._remotes = {}
self._default = None
self._repo_hooks_project = None
self._notice = None
self.branch = None
self._manifest_server = None
def _Load(self):
if not self._loaded:
m = self.manifestProject
b = m.GetBranch(m.CurrentBranch).merge
if b is not None and b.startswith(R_HEADS):
b = b[len(R_HEADS):]
self.branch = b
nodes = []
nodes.append(self._ParseManifestXml(self.manifestFile))
local = os.path.join(self.repodir, LOCAL_MANIFEST_NAME)
if os.path.exists(local):
nodes.append(self._ParseManifestXml(local))
self._ParseManifest(nodes)
if self.IsMirror:
self._AddMetaProjectMirror(self.repoProject)
self._AddMetaProjectMirror(self.manifestProject)
self._loaded = True
def _ParseManifestXml(self, path):
root = xml.dom.minidom.parse(path)
if not root or not root.childNodes:
raise ManifestParseError("no root node in %s" % (path,))
config = root.childNodes[0]
if config.nodeName != 'manifest':
raise ManifestParseError("no <manifest> in %s" % (path,))
nodes = []
for node in config.childNodes:
if node.nodeName == 'include':
name = self._reqatt(node, 'name')
fp = os.path.join(os.path.dirname(path), name)
if not os.path.isfile(fp):
raise ManifestParseError, \
"include %s doesn't exist or isn't a file" % \
(name,)
try:
nodes.extend(self._ParseManifestXml(fp))
# We should isolate this to the exact exception, but that's tricky;
# the actual parsing implementation may vary.
except (KeyboardInterrupt, RuntimeError, SystemExit):
raise
except Exception, e:
raise ManifestParseError(
"failed parsing included manifest %s: %s", (name, e))
else:
nodes.append(node)
return nodes
def _ParseManifest(self, node_list):
for node in itertools.chain(*node_list):
if node.nodeName == 'remote':
remote = self._ParseRemote(node)
if self._remotes.get(remote.name):
raise ManifestParseError(
'duplicate remote %s in %s' %
(remote.name, self.manifestFile))
self._remotes[remote.name] = remote
for node in itertools.chain(*node_list):
if node.nodeName == 'default':
if self._default is not None:
raise ManifestParseError(
'duplicate default in %s' %
(self.manifestFile))
self._default = self._ParseDefault(node)
if self._default is None:
self._default = _Default()
for node in itertools.chain(*node_list):
if node.nodeName == 'notice':
if self._notice is not None:
raise ManifestParseError(
'duplicate notice in %s' %
(self.manifestFile))
self._notice = self._ParseNotice(node)
for node in itertools.chain(*node_list):
if node.nodeName == 'manifest-server':
url = self._reqatt(node, 'url')
if self._manifest_server is not None:
raise ManifestParseError(
'duplicate manifest-server in %s' %
(self.manifestFile))
self._manifest_server = url
for node in itertools.chain(*node_list):
if node.nodeName == 'project':
project = self._ParseProject(node)
if self._projects.get(project.name):
raise ManifestParseError(
'duplicate project %s in %s' %
(project.name, self.manifestFile))
self._projects[project.name] = project
if node.nodeName == 'repo-hooks':
# Get the name of the project and the (space-separated) list of enabled hooks.
repo_hooks_project = self._reqatt(node, 'in-project')
enabled_repo_hooks = self._reqatt(node, 'enabled-list').split()
# Only one project can be the hooks project
if self._repo_hooks_project is not None:
raise ManifestParseError(
'duplicate repo-hooks in %s' %
(self.manifestFile))
# Store a reference to the Project.
try:
self._repo_hooks_project = self._projects[repo_hooks_project]
except KeyError:
raise ManifestParseError(
'project %s not found for repo-hooks' %
(repo_hooks_project))
# Store the enabled hooks in the Project object.
self._repo_hooks_project.enabled_repo_hooks = enabled_repo_hooks
if node.nodeName == 'remove-project':
name = self._reqatt(node, 'name')
try:
del self._projects[name]
except KeyError:
raise ManifestParseError(
'project %s not found' %
(name))
# If the manifest removes the hooks project, treat it as if it deleted
# the repo-hooks element too.
if self._repo_hooks_project and (self._repo_hooks_project.name == name):
self._repo_hooks_project = None
def _AddMetaProjectMirror(self, m):
name = None
m_url = m.GetRemote(m.remote.name).url
if m_url.endswith('/.git'):
raise ManifestParseError, 'refusing to mirror %s' % m_url
if self._default and self._default.remote:
url = self._default.remote.resolvedFetchUrl
if not url.endswith('/'):
url += '/'
if m_url.startswith(url):
remote = self._default.remote
name = m_url[len(url):]
if name is None:
s = m_url.rindex('/') + 1
manifestUrl = self.manifestProject.config.GetString('remote.origin.url')
remote = _XmlRemote('origin', m_url[:s], manifestUrl)
name = m_url[s:]
if name.endswith('.git'):
name = name[:-4]
if name not in self._projects:
m.PreSync()
gitdir = os.path.join(self.topdir, '%s.git' % name)
project = Project(manifest = self,
name = name,
remote = remote.ToRemoteSpec(name),
gitdir = gitdir,
worktree = None,
relpath = None,
revisionExpr = m.revisionExpr,
revisionId = None)
self._projects[project.name] = project
def _ParseRemote(self, node):
"""
reads a <remote> element from the manifest file
"""
name = self._reqatt(node, 'name')
fetch = self._reqatt(node, 'fetch')
review = node.getAttribute('review')
if review == '':
review = None
manifestUrl = self.manifestProject.config.GetString('remote.origin.url')
return _XmlRemote(name, fetch, manifestUrl, review)
def _ParseDefault(self, node):
"""
reads a <default> element from the manifest file
"""
d = _Default()
d.remote = self._get_remote(node)
d.revisionExpr = node.getAttribute('revision')
if d.revisionExpr == '':
d.revisionExpr = None
sync_j = node.getAttribute('sync-j')
if sync_j == '' or sync_j is None:
d.sync_j = 1
else:
d.sync_j = int(sync_j)
sync_c = node.getAttribute('sync-c')
if not sync_c:
d.sync_c = False
else:
d.sync_c = sync_c.lower() in ("yes", "true", "1")
return d
def _ParseNotice(self, node):
"""
reads a <notice> element from the manifest file
The <notice> element is distinct from other tags in the XML in that the
data is conveyed between the start and end tag (it's not an empty-element
tag).
The white space (carriage returns, indentation) for the notice element is
relevant and is parsed in a way that is based on how python docstrings work.
In fact, the code is remarkably similar to here:
http://www.python.org/dev/peps/pep-0257/
"""
# Get the data out of the node...
notice = node.childNodes[0].data
# Figure out minimum indentation, skipping the first line (the same line
# as the <notice> tag)...
minIndent = sys.maxint
lines = notice.splitlines()
for line in lines[1:]:
lstrippedLine = line.lstrip()
if lstrippedLine:
indent = len(line) - len(lstrippedLine)
minIndent = min(indent, minIndent)
# Strip leading / trailing blank lines and also indentation.
cleanLines = [lines[0].strip()]
for line in lines[1:]:
cleanLines.append(line[minIndent:].rstrip())
# Clear completely blank lines from front and back...
while cleanLines and not cleanLines[0]:
del cleanLines[0]
while cleanLines and not cleanLines[-1]:
del cleanLines[-1]
return '\n'.join(cleanLines)
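A hypothetical notice and its parsed form, to make the docstring-style trimming concrete:

    # <notice>
    #     Your sources are mirrored nightly.
    #       Questions: build@example.com
    # </notice>
    #
    # -> 'Your sources are mirrored nightly.\n  Questions: build@example.com'
    # (the common four-space indent is stripped; relative indent and line
    #  breaks are preserved)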
def _ParseProject(self, node):
"""
reads a <project> element from the manifest file
"""
name = self._reqatt(node, 'name')
remote = self._get_remote(node)
if remote is None:
remote = self._default.remote
if remote is None:
raise ManifestParseError, \
"no remote for project %s within %s" % \
(name, self.manifestFile)
revisionExpr = node.getAttribute('revision')
if not revisionExpr:
revisionExpr = self._default.revisionExpr
if not revisionExpr:
raise ManifestParseError, \
"no revision for project %s within %s" % \
(name, self.manifestFile)
path = node.getAttribute('path')
if not path:
path = name
if path.startswith('/'):
raise ManifestParseError, \
"project %s path cannot be absolute in %s" % \
(name, self.manifestFile)
rebase = node.getAttribute('rebase')
if not rebase:
rebase = True
else:
rebase = rebase.lower() in ("yes", "true", "1")
sync_c = node.getAttribute('sync-c')
if not sync_c:
sync_c = False
else:
sync_c = sync_c.lower() in ("yes", "true", "1")
groups = ''
if node.hasAttribute('groups'):
groups = node.getAttribute('groups')
groups = [x for x in re.split(r'[,\s]+', groups) if x]
if 'default' not in groups:
groups.append('default')
if self.IsMirror:
relpath = None
worktree = None
gitdir = os.path.join(self.topdir, '%s.git' % name)
else:
worktree = os.path.join(self.topdir, path).replace('\\', '/')
gitdir = os.path.join(self.repodir, 'projects/%s.git' % path)
project = Project(manifest = self,
name = name,
remote = remote.ToRemoteSpec(name),
gitdir = gitdir,
worktree = worktree,
relpath = path,
revisionExpr = revisionExpr,
revisionId = None,
rebase = rebase,
groups = groups,
sync_c = sync_c)
for n in node.childNodes:
if n.nodeName == 'copyfile':
self._ParseCopyFile(project, n)
if n.nodeName == 'annotation':
self._ParseAnnotation(project, n)
return project
def _ParseCopyFile(self, project, node):
src = self._reqatt(node, 'src')
dest = self._reqatt(node, 'dest')
if not self.IsMirror:
# src is project relative;
# dest is relative to the top of the tree
project.AddCopyFile(src, dest, os.path.join(self.topdir, dest))
def _ParseAnnotation(self, project, node):
name = self._reqatt(node, 'name')
value = self._reqatt(node, 'value')
try:
keep = self._reqatt(node, 'keep').lower()
except ManifestParseError:
keep = "true"
if keep != "true" and keep != "false":
raise ManifestParseError, "optional \"keep\" attribute must be \"true\" or \"false\""
project.AddAnnotation(name, value, keep)
def _get_remote(self, node):
name = node.getAttribute('remote')
if not name:
return None
v = self._remotes.get(name)
if not v:
raise ManifestParseError, \
"remote %s not defined in %s" % \
(name, self.manifestFile)
return v
def _reqatt(self, node, attname):
"""
reads a required attribute from the node.
"""
v = node.getAttribute(attname)
if not v:
raise ManifestParseError, \
"no %s in <%s> within %s" % \
(attname, node.nodeName, self.manifestFile)
return v

pager.py

@ -22,7 +22,7 @@ active = False
def RunPager(globalConfig):
global active
if not os.isatty(0):
if not os.isatty(0) or not os.isatty(1):
return
pager = _SelectPager(globalConfig)
if pager == '' or pager == 'cat':

progress.py (new file, 78 lines)

@ -0,0 +1,78 @@
#
# Copyright (C) 2009 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
from time import time
from trace import IsTrace
_NOT_TTY = not os.isatty(2)
class Progress(object):
def __init__(self, title, total=0, units=''):
self._title = title
self._total = total
self._done = 0
self._lastp = -1
self._start = time()
self._show = False
self._units = units
def update(self, inc=1):
self._done += inc
if _NOT_TTY or IsTrace():
return
if not self._show:
if 0.5 <= time() - self._start:
self._show = True
else:
return
if self._total <= 0:
sys.stderr.write('\r%s: %d, ' % (
self._title,
self._done))
sys.stderr.flush()
else:
p = (100 * self._done) / self._total
if self._lastp != p:
self._lastp = p
sys.stderr.write('\r%s: %3d%% (%d%s/%d%s) ' % (
self._title,
p,
self._done, self._units,
self._total, self._units))
sys.stderr.flush()
def end(self):
if _NOT_TTY or IsTrace() or not self._show:
return
if self._total <= 0:
sys.stderr.write('\r%s: %d, done. \n' % (
self._title,
self._done))
sys.stderr.flush()
else:
p = (100 * self._done) / self._total
sys.stderr.write('\r%s: %3d%% (%d%s/%d%s), done. \n' % (
self._title,
p,
self._done, self._units,
self._total, self._units))
sys.stderr.flush()
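A minimal usage sketch, matching the API above (the work loop is hypothetical):

    pm = Progress('Fetching projects', total=120)
    for project in projects:    # hypothetical iterable of work items
        Fetch(project)          # hypothetical per-item work
        pm.update()
    pm.end()    # prints e.g. 'Fetching projects: 100% (120/120), done.'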

project.py (1697 lines changed; diff suppressed because it is too large)

repo (161 lines changed)

@ -2,7 +2,7 @@
## repo default configuration
##
REPO_URL='git://android.git.kernel.org/tools/repo.git'
REPO_URL='https://gerrit.googlesource.com/git-repo'
REPO_REV='stable'
# Copyright (C) 2008 Google Inc.
@ -28,7 +28,7 @@ if __name__ == '__main__':
del magic
# increment this whenever we make important changes to this script
VERSION = (1, 8)
VERSION = (1, 17)
# increment this if the MAINTAINER_KEYS block is modified
KEYRING_VERSION = (1,0)
@ -91,6 +91,7 @@ import re
import readline
import subprocess
import sys
import urllib2
home_dot_repo = os.path.expanduser('~/.repoconfig')
gpg_dir = os.path.join(home_dot_repo, 'gnupg')
@ -118,9 +119,25 @@ group.add_option('-m', '--manifest-name',
group.add_option('--mirror',
dest='mirror', action='store_true',
help='mirror the forest')
group.add_option('--reference',
dest='reference',
help='location of mirror directory', metavar='DIR')
group.add_option('--depth', type='int', default=None,
dest='depth',
help='create a shallow clone with given depth; see git clone')
group.add_option('-g', '--groups',
dest='groups', default='default',
help='restrict manifest projects to ones with a specified group',
metavar='GROUP')
group.add_option('-p', '--platform',
dest='platform', default="auto",
help='restrict manifest projects to ones with a specified '
'platform group [auto|all|none|linux|darwin|...]',
metavar='PLATFORM')
# Tool
group = init_optparse.add_option_group('Version options')
group = init_optparse.add_option_group('repo Version options')
group.add_option('--repo-url',
dest='repo_url',
help='repo repository location', metavar='URL')
@ -131,6 +148,11 @@ group.add_option('--no-repo-verify',
dest='no_repo_verify', action='store_true',
help='do not verify repo source code')
# Other
group = init_optparse.add_option_group('Other options')
group.add_option('--config-name',
dest='config_name', action="store_true", default=False,
help='Always prompt for name/e-mail')
class CloneFailure(Exception):
"""Indicate the remote clone of repo itself failed.
@ -141,7 +163,7 @@ def _Init(args):
"""Installs repo by cloning it over the network.
"""
opt, args = init_optparse.parse_args(args)
if args or not opt.manifest_url:
if args:
init_optparse.print_usage()
sys.exit(1)
@ -180,10 +202,6 @@ def _Init(args):
else:
can_verify = True
if not opt.quiet:
print >>sys.stderr, 'Getting repo ...'
print >>sys.stderr, ' from %s' % url
dst = os.path.abspath(os.path.join(repodir, S_repo))
_Clone(url, dst, opt.quiet)
@ -202,7 +220,17 @@ def _Init(args):
def _CheckGitVersion():
cmd = [GIT, '--version']
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
try:
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
except OSError, e:
print >>sys.stderr
print >>sys.stderr, "fatal: '%s' is not available" % GIT
print >>sys.stderr, 'fatal: %s' % e
print >>sys.stderr
print >>sys.stderr, 'Please make sure %s is installed'\
' and in your path.' % GIT
raise CloneFailure()
ver_str = proc.stdout.read().strip()
proc.stdout.close()
proc.wait()
@ -256,8 +284,8 @@ def _SetupGnuPG(quiet):
gpg_dir, e.strerror)
sys.exit(1)
env = dict(os.environ)
env['GNUPGHOME'] = gpg_dir
env = os.environ.copy()
env['GNUPGHOME'] = gpg_dir.encode()
cmd = ['gpg', '--import']
try:
@ -293,15 +321,43 @@ def _SetConfig(local, name, value):
raise CloneFailure()
def _Fetch(local, quiet, *args):
def _InitHttp():
handlers = []
mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
try:
import netrc
n = netrc.netrc()
for host in n.hosts:
p = n.hosts[host]
mgr.add_password(p[1], 'http://%s/' % host, p[0], p[2])
mgr.add_password(p[1], 'https://%s/' % host, p[0], p[2])
except:
pass
handlers.append(urllib2.HTTPBasicAuthHandler(mgr))
handlers.append(urllib2.HTTPDigestAuthHandler(mgr))
if 'http_proxy' in os.environ:
url = os.environ['http_proxy']
handlers.append(urllib2.ProxyHandler({'http': url, 'https': url}))
if 'REPO_CURL_VERBOSE' in os.environ:
handlers.append(urllib2.HTTPHandler(debuglevel=1))
handlers.append(urllib2.HTTPSHandler(debuglevel=1))
urllib2.install_opener(urllib2.build_opener(*handlers))
def _Fetch(url, local, src, quiet):
if not quiet:
print >>sys.stderr, 'Get %s' % url
cmd = [GIT, 'fetch']
if quiet:
cmd.append('--quiet')
err = subprocess.PIPE
else:
err = None
cmd.extend(args)
cmd.append('origin')
cmd.append(src)
cmd.append('+refs/heads/*:refs/remotes/origin/*')
cmd.append('refs/tags/*:refs/tags/*')
proc = subprocess.Popen(cmd, cwd = local, stderr = err)
if err:
@ -310,6 +366,62 @@ def _Fetch(local, quiet, *args):
if proc.wait() != 0:
raise CloneFailure()
def _DownloadBundle(url, local, quiet):
if not url.endswith('/'):
url += '/'
url += 'clone.bundle'
proc = subprocess.Popen(
[GIT, 'config', '--get-regexp', 'url.*.insteadof'],
cwd = local,
stdout = subprocess.PIPE)
for line in proc.stdout:
m = re.compile(r'^url\.(.*)\.insteadof (.*)$').match(line)
if m:
new_url = m.group(1)
old_url = m.group(2)
if url.startswith(old_url):
url = new_url + url[len(old_url):]
break
proc.stdout.close()
proc.wait()
if not url.startswith('http:') and not url.startswith('https:'):
return False
dest = open(os.path.join(local, '.git', 'clone.bundle'), 'w+b')
try:
try:
r = urllib2.urlopen(url)
except urllib2.HTTPError, e:
if e.code == 404:
return False
print >>sys.stderr, 'fatal: Cannot get %s' % url
print >>sys.stderr, 'fatal: HTTP error %s' % e.code
raise CloneFailure()
except urllib2.URLError, e:
print >>sys.stderr, 'fatal: Cannot get %s' % url
print >>sys.stderr, 'fatal: error %s' % e.reason
raise CloneFailure()
try:
if not quiet:
print >>sys.stderr, 'Get %s' % url
while True:
buf = r.read(8192)
if buf == '':
return True
dest.write(buf)
finally:
r.close()
finally:
dest.close()
def _ImportBundle(local):
path = os.path.join(local, '.git', 'clone.bundle')
try:
_Fetch(local, local, path, True)
finally:
os.remove(path)
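Taken together with _Clone() below, the bootstrap flow is roughly:

    # _Clone('https://gerrit.googlesource.com/git-repo', dst, quiet):
    #   1. _DownloadBundle() applies any url.*.insteadof rewrites, then GETs
    #      <url>/clone.bundle over HTTP(S)
    #   2. on success, _ImportBundle() fetches heads and tags out of the
    #      saved .git/clone.bundle file
    #   3. on HTTP 404 (no bundle published), _Clone() falls back to
    #      _Fetch(url, dst, 'origin', quiet) over the native git protocol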
def _Clone(url, local, quiet):
"""Clones a git repository to a new subdirectory of repodir
@ -337,11 +449,14 @@ def _Clone(url, local, quiet):
print >>sys.stderr, 'fatal: could not create %s' % local
raise CloneFailure()
_InitHttp()
_SetConfig(local, 'remote.origin.url', url)
_SetConfig(local, 'remote.origin.fetch',
'+refs/heads/*:refs/remotes/origin/*')
_Fetch(local, quiet)
_Fetch(local, quiet, '--tags')
if _DownloadBundle(url, local, quiet):
_ImportBundle(local)
else:
_Fetch(url, local, 'origin', quiet)
def _Verify(cwd, branch, quiet):
@ -375,8 +490,8 @@ def _Verify(cwd, branch, quiet):
% (branch, cur)
print >>sys.stderr
env = dict(os.environ)
env['GNUPGHOME'] = gpg_dir
env = os.environ.copy()
env['GNUPGHOME'] = gpg_dir.encode()
cmd = [GIT, 'tag', '-v', cur]
proc = subprocess.Popen(cmd,
@ -427,10 +542,14 @@ def _FindRepo():
dir = os.getcwd()
repo = None
while dir != '/' and not repo:
olddir = None
while dir != '/' \
and dir != olddir \
and not repo:
repo = os.path.join(dir, repodir, REPO_MAIN)
if not os.path.isfile(repo):
repo = None
olddir = dir
dir = os.path.dirname(dir)
return (repo, os.path.join(dir, repodir))
@ -476,6 +595,7 @@ def _Help(args):
if args:
if args[0] == 'init':
init_optparse.print_help()
sys.exit(0)
else:
print >>sys.stderr,\
"error: '%s' is not a bootstrap command.\n"\
@@ -505,7 +625,7 @@ def _RunSelf(wrapper_path):
my_git = os.path.join(my_dir, '.git')
if os.path.isfile(my_main) and os.path.isdir(my_git):
for name in ['manifest.py',
for name in ['git_config.py',
'project.py',
'subcmds']:
if not os.path.exists(os.path.join(my_dir, name)):
@@ -588,4 +708,3 @@ def main(orig_args):
if __name__ == '__main__':
main(sys.argv[1:])

repo

@@ -16,6 +16,7 @@
import sys
from command import Command
from git_command import git
from progress import Progress
class Abandon(Command):
common = True
@@ -38,5 +39,32 @@ It is equivalent to "git branch -D <branchname>".
print >>sys.stderr, "error: '%s' is not a valid name" % nb
sys.exit(1)
for project in self.GetProjects(args[1:]):
project.AbandonBranch(nb)
nb = args[0]
err = []
success = []
all = self.GetProjects(args[1:])
pm = Progress('Abandon %s' % nb, len(all))
for project in all:
pm.update()
status = project.AbandonBranch(nb)
if status is not None:
if status:
success.append(project)
else:
err.append(project)
pm.end()
if err:
for p in err:
print >>sys.stderr,\
"error: %s/: cannot abandon %s" \
% (p.relpath, nb)
sys.exit(1)
elif not success:
print >>sys.stderr, 'error: no project has branch %s' % nb
sys.exit(1)
else:
print >>sys.stderr, 'Abandoned in %d project(s):\n %s' % (
len(success), '\n '.join(p.relpath for p in success))

subcmds/branches.py Normal file

@@ -0,0 +1,166 @@
#
# Copyright (C) 2009 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
from color import Coloring
from command import Command
class BranchColoring(Coloring):
def __init__(self, config):
Coloring.__init__(self, config, 'branch')
self.current = self.printer('current', fg='green')
self.local = self.printer('local')
self.notinproject = self.printer('notinproject', fg='red')
class BranchInfo(object):
def __init__(self, name):
self.name = name
self.current = 0
self.published = 0
self.published_equal = 0
self.projects = []
def add(self, b):
if b.current:
self.current += 1
if b.published:
self.published += 1
if b.revision == b.published:
self.published_equal += 1
self.projects.append(b)
@property
def IsCurrent(self):
return self.current > 0
@property
def IsPublished(self):
return self.published > 0
@property
def IsPublishedEqual(self):
return self.published_equal == len(self.projects)
class Branches(Command):
common = True
helpSummary = "View current topic branches"
helpUsage = """
%prog [<project>...]
Summarizes the currently available topic branches.
Branch Display
--------------
The branch display output by this command is organized into four
columns of information; for example:
*P nocolor | in repo
repo2 |
The first column contains a * if the branch is the currently
checked out branch in any of the specified projects, or a blank
if no project has the branch checked out.
The second column contains either blank, p or P, depending upon
the upload status of the branch.
(blank): branch not yet published by repo upload
P: all commits were published by repo upload
p: only some commits were published by repo upload
The third column contains the branch name.
The fourth column (after the | separator) lists the projects that
the branch appears in, or does not appear in. If no project list
is shown, then the branch appears in all projects.
"""
def Execute(self, opt, args):
projects = self.GetProjects(args)
out = BranchColoring(self.manifest.manifestProject.config)
all = {}
project_cnt = len(projects)
for project in projects:
for name, b in project.GetBranches().iteritems():
b.project = project
if name not in all:
all[name] = BranchInfo(name)
all[name].add(b)
names = all.keys()
names.sort()
if not names:
print >>sys.stderr, ' (no branches)'
return
width = 25
for name in names:
if width < len(name):
width = len(name)
for name in names:
i = all[name]
in_cnt = len(i.projects)
if i.IsCurrent:
current = '*'
hdr = out.current
else:
current = ' '
hdr = out.local
if i.IsPublishedEqual:
published = 'P'
elif i.IsPublished:
published = 'p'
else:
published = ' '
hdr('%c%c %-*s' % (current, published, width, name))
out.write(' |')
if in_cnt < project_cnt:
fmt = out.write
paths = []
if in_cnt < project_cnt - in_cnt:
type = 'in'
for b in i.projects:
paths.append(b.project.relpath)
else:
fmt = out.notinproject
type = 'not in'
have = set()
for b in i.projects:
have.add(b.project)
for p in projects:
if not p in have:
paths.append(p.relpath)
s = ' %s %s' % (type, ', '.join(paths))
if width + 7 + len(s) < 80:
fmt(s)
else:
fmt(' %s:' % type)
for p in paths:
out.nl()
fmt(width*' ' + ' %s' % p)
else:
out.write(' in all projects')
out.nl()

subcmds/checkout.py Normal file

@@ -0,0 +1,64 @@
#
# Copyright (C) 2009 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
from command import Command
from progress import Progress
class Checkout(Command):
common = True
helpSummary = "Checkout a branch for development"
helpUsage = """
%prog <branchname> [<project>...]
"""
helpDescription = """
The '%prog' command checks out an existing branch that was previously
created by 'repo start'.
The command is equivalent to:
repo forall [<project>...] -c git checkout <branchname>
"""
def Execute(self, opt, args):
if not args:
self.Usage()
nb = args[0]
err = []
success = []
all = self.GetProjects(args[1:])
pm = Progress('Checkout %s' % nb, len(all))
for project in all:
pm.update()
status = project.CheckoutBranch(nb)
if status is not None:
if status:
success.append(project)
else:
err.append(project)
pm.end()
if err:
for p in err:
print >>sys.stderr,\
"error: %s/: cannot checkout %s" \
% (p.relpath, nb)
sys.exit(1)
elif not success:
print >>sys.stderr, 'error: no project has branch %s' % nb
sys.exit(1)

subcmds/cherry_pick.py Normal file

@@ -0,0 +1,114 @@
#
# Copyright (C) 2010 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys, re, string, random, os
from command import Command
from git_command import GitCommand
CHANGE_ID_RE = re.compile(r'^\s*Change-Id: I([0-9a-f]{40})\s*$')
class CherryPick(Command):
common = True
helpSummary = "Cherry-pick a change."
helpUsage = """
%prog <sha1>
"""
helpDescription = """
'%prog' cherry-picks a change from one branch to another.
The change id will be updated, and a reference to the old
change id will be added.
"""
def _Options(self, p):
pass
def Execute(self, opt, args):
if len(args) != 1:
self.Usage()
reference = args[0]
p = GitCommand(None,
['rev-parse', '--verify', reference],
capture_stdout = True,
capture_stderr = True)
if p.Wait() != 0:
print >>sys.stderr, p.stderr
sys.exit(1)
sha1 = p.stdout.strip()
p = GitCommand(None, ['cat-file', 'commit', sha1], capture_stdout=True)
if p.Wait() != 0:
print >>sys.stderr, "error: Failed to retrieve old commit message"
sys.exit(1)
old_msg = self._StripHeader(p.stdout)
p = GitCommand(None,
['cherry-pick', sha1],
capture_stdout = True,
capture_stderr = True)
status = p.Wait()
print >>sys.stdout, p.stdout
print >>sys.stderr, p.stderr
if status == 0:
# The cherry-pick was applied correctly. We just need to edit the
# commit message.
new_msg = self._Reformat(old_msg, sha1)
p = GitCommand(None, ['commit', '--amend', '-F', '-'],
provide_stdin = True,
capture_stdout = True,
capture_stderr = True)
p.stdin.write(new_msg)
if p.Wait() != 0:
print >>sys.stderr, "error: Failed to update commit message"
sys.exit(1)
else:
print >>sys.stderr, """\
NOTE: When committing (please see above) and editing the commit message,
please remove the old Change-Id line and add:
"""
print >>sys.stderr, self._GetReference(sha1)
print >>sys.stderr
def _IsChangeId(self, line):
return CHANGE_ID_RE.match(line)
def _GetReference(self, sha1):
return "(cherry picked from commit %s)" % sha1
def _StripHeader(self, commit_msg):
lines = commit_msg.splitlines()
return "\n".join(lines[lines.index("")+1:])
def _Reformat(self, old_msg, sha1):
new_msg = []
for line in old_msg.splitlines():
if not self._IsChangeId(line):
new_msg.append(line)
# Add a blank line between the message and the change id/reference
try:
if new_msg[-1].strip() != "":
new_msg.append("")
except IndexError:
pass
new_msg.append(self._GetReference(sha1))
return "\n".join(new_msg)

subcmds/diff.py

@@ -20,8 +20,21 @@ class Diff(PagedCommand):
helpSummary = "Show changes between commit and working tree"
helpUsage = """
%prog [<project>...]
The -u option causes '%prog' to generate diff output with file paths
relative to the repository root, so the output can be applied
to the Unix 'patch' command.
"""
def _Options(self, p):
def cmd(option, opt_str, value, parser):
setattr(parser.values, option.dest, list(parser.rargs))
while parser.rargs:
del parser.rargs[0]
p.add_option('-u', '--absolute',
dest='absolute', action='store_true',
help='Paths are relative to the repository root')
def Execute(self, opt, args):
for project in self.GetProjects(args):
project.PrintWorkTreeDiff()
project.PrintWorkTreeDiff(opt.absolute)

subcmds/download.py

@@ -33,9 +33,20 @@ makes it available in your project's local working directory.
"""
def _Options(self, p):
pass
p.add_option('-c','--cherry-pick',
dest='cherrypick', action='store_true',
help="cherry-pick instead of checkout")
p.add_option('-r','--revert',
dest='revert', action='store_true',
help="revert instead of checkout")
p.add_option('-f','--ff-only',
dest='ffonly', action='store_true',
help="force fast-forward merge")
def _ParseChangeIds(self, args):
if not args:
self.Usage()
to_get = []
project = None
@@ -63,7 +74,7 @@ makes it available in your project's local working directory.
% (project.name, change_id, ps_id)
sys.exit(1)
if not dl.commits:
if not opt.revert and not dl.commits:
print >>sys.stderr, \
'[%s] change %d/%d has already been merged' \
% (project.name, change_id, ps_id)
@@ -75,4 +86,11 @@ makes it available in your project's local working directory.
% (project.name, change_id, ps_id, len(dl.commits))
for c in dl.commits:
print >>sys.stderr, ' %s' % (c)
project._Checkout(dl.commit)
if opt.cherrypick:
project._CherryPick(dl.commit)
elif opt.revert:
project._Revert(dl.commit)
elif opt.ffonly:
project._FastForward(dl.commit, ffonly=True)
else:
project._Checkout(dl.commit)
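With these options, hypothetical invocations look like the following (the project path and change/patchset numbers are invented):

repo download platform/build 1234/5      # checkout, the default
repo download -c platform/build 1234/5   # cherry-pick instead of checkout
repo download -r platform/build 1234/5   # revert instead of checkout
repo download -f platform/build 1234/5   # fast-forward merge only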

subcmds/forall.py

@@ -13,13 +13,30 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import fcntl
import re
import os
import select
import sys
import subprocess
from command import Command
class Forall(Command):
from color import Coloring
from command import Command, MirrorSafeCommand
_CAN_COLOR = [
'branch',
'diff',
'grep',
'log',
]
class ForallColoring(Coloring):
def __init__(self, config):
Coloring.__init__(self, config, 'forall')
self.project = self.printer('project', attr='bold')
class Forall(Command, MirrorSafeCommand):
common = False
helpSummary = "Run a shell command in each project"
helpUsage = """
@@ -28,17 +45,53 @@ class Forall(Command):
helpDescription = """
Executes the same shell command in each project.
Output Formatting
-----------------
The -p option causes '%prog' to bind pipes to the command's stdin,
stdout and stderr streams, and pipe all output into a continuous
stream that is displayed in a single pager session. Project headings
are inserted before the output of each command is displayed. If the
command produces no output in a project, no heading is displayed.
The formatting convention used by -p is very suitable for some
types of searching, e.g. `repo forall -p -c git log -SFoo` will
print all commits that add or remove references to Foo.
The -v option causes '%prog' to display stderr messages if a
command produces output only on stderr. Normally the -p option
causes command output to be suppressed until the command produces
at least one byte of output on stdout.
Environment
-----------
pwd is the project's working directory.
pwd is the project's working directory. If the current client is
a mirror client, then pwd is the Git repository.
REPO_PROJECT is set to the unique name of the project.
REPO_PATH is the path relative to the root of the client.
REPO_REMOTE is the name of the remote system from the manifest.
REPO_LREV is the name of the revision from the manifest, translated
to a local tracking branch. If you need to pass the manifest
revision to a locally executed git command, use REPO_LREV.
REPO_RREV is the name of the revision from the manifest, exactly
as written in the manifest.
REPO__* are any extra environment variables, specified by the
"annotation" element under any project element. This can be useful
for differentiating trees based on user-specific criteria, or simply
annotating tree details.
shell positional arguments ($1, $2, .., $#) are set to any arguments
following <command>.
stdin, stdout, stderr are inherited from the terminal and are
not redirected.
Unless -p is used, stdin, stdout, stderr are inherited from the
terminal and are not redirected.
"""
def _Options(self, p):
@@ -52,6 +105,17 @@ not redirected.
action='callback',
callback=cmd)
g = p.add_option_group('Output')
g.add_option('-p',
dest='project_header', action='store_true',
help='Show project headers before output')
g.add_option('-v', '--verbose',
dest='verbose', action='store_true',
help='Show command error messages')
def WantPager(self, opt):
return opt.project_header
def Execute(self, opt, args):
if not opt.command:
self.Usage()
@@ -66,15 +130,128 @@ not redirected.
cmd.append(cmd[0])
cmd.extend(opt.command[1:])
if opt.project_header \
and not shell \
and cmd[0] == 'git':
# If this is a direct git command that can enable colorized
# output and the user prefers coloring, add --color into the
# command line because we are going to wrap the command into
# a pipe and git won't know coloring should activate.
#
for cn in cmd[1:]:
if not cn.startswith('-'):
break
if cn in _CAN_COLOR:
class ColorCmd(Coloring):
def __init__(self, config, cmd):
Coloring.__init__(self, config, cmd)
if ColorCmd(self.manifest.manifestProject.config, cn).is_on:
cmd.insert(cmd.index(cn) + 1, '--color')
mirror = self.manifest.IsMirror
out = ForallColoring(self.manifest.manifestProject.config)
out.redirect(sys.stdout)
rc = 0
first = True
for project in self.GetProjects(args):
env = dict(os.environ.iteritems())
env['REPO_PROJECT'] = project.name
env = os.environ.copy()
def setenv(name, val):
if val is None:
val = ''
env[name] = val.encode()
setenv('REPO_PROJECT', project.name)
setenv('REPO_PATH', project.relpath)
setenv('REPO_REMOTE', project.remote.name)
setenv('REPO_LREV', project.GetRevisionId())
setenv('REPO_RREV', project.revisionExpr)
for a in project.annotations:
setenv("REPO__%s" % (a.name), a.value)
if mirror:
setenv('GIT_DIR', project.gitdir)
cwd = project.gitdir
else:
cwd = project.worktree
if not os.path.exists(cwd):
if (opt.project_header and opt.verbose) \
or not opt.project_header:
print >>sys.stderr, 'skipping %s/' % project.relpath
continue
if opt.project_header:
stdin = subprocess.PIPE
stdout = subprocess.PIPE
stderr = subprocess.PIPE
else:
stdin = None
stdout = None
stderr = None
p = subprocess.Popen(cmd,
cwd = project.worktree,
cwd = cwd,
shell = shell,
env = env)
env = env,
stdin = stdin,
stdout = stdout,
stderr = stderr)
if opt.project_header:
class sfd(object):
def __init__(self, fd, dest):
self.fd = fd
self.dest = dest
def fileno(self):
return self.fd.fileno()
empty = True
didout = False
errbuf = ''
p.stdin.close()
s_in = [sfd(p.stdout, sys.stdout),
sfd(p.stderr, sys.stderr)]
for s in s_in:
flags = fcntl.fcntl(s.fd, fcntl.F_GETFL)
fcntl.fcntl(s.fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)
while s_in:
in_ready, out_ready, err_ready = select.select(s_in, [], [])
for s in in_ready:
buf = s.fd.read(4096)
if not buf:
s.fd.close()
s_in.remove(s)
continue
if not opt.verbose:
if s.fd == p.stdout:
didout = True
else:
errbuf += buf
continue
if empty:
if first:
first = False
else:
out.nl()
out.project('project %s/', project.relpath)
out.nl()
out.flush()
if errbuf:
sys.stderr.write(errbuf)
sys.stderr.flush()
errbuf = ''
empty = False
s.dest.write(buf)
s.dest.flush()
r = p.wait()
if r != 0 and r != rc:
rc = r

subcmds/grep.py Normal file

@@ -0,0 +1,243 @@
#
# Copyright (C) 2009 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
from optparse import SUPPRESS_HELP
from color import Coloring
from command import PagedCommand
from git_command import git_require, GitCommand
class GrepColoring(Coloring):
def __init__(self, config):
Coloring.__init__(self, config, 'grep')
self.project = self.printer('project', attr='bold')
class Grep(PagedCommand):
common = True
helpSummary = "Print lines matching a pattern"
helpUsage = """
%prog {pattern | -e pattern} [<project>...]
"""
helpDescription = """
Search for the specified patterns in all project files.
Boolean Options
---------------
The following options can appear as often as necessary to express
the pattern to locate:
-e PATTERN
--and, --or, --not, -(, -)
Further, the -r/--revision option may be specified multiple times
in order to scan multiple trees. If the same file matches in more
than one tree, only the first result is reported, prefixed by the
revision name it was found under.
Examples
--------
Look for a line that has '#define' and either 'MAX_PATH' or 'PATH_MAX':
repo grep -e '#define' --and -\( -e MAX_PATH -e PATH_MAX \)
Look for a line that has 'NODE' or 'Unexpected' in files that
contain a line that matches both expressions:
repo grep --all-match -e NODE -e Unexpected
"""
def _Options(self, p):
def carry(option,
opt_str,
value,
parser):
pt = getattr(parser.values, 'cmd_argv', None)
if pt is None:
pt = []
setattr(parser.values, 'cmd_argv', pt)
if opt_str == '-(':
pt.append('(')
elif opt_str == '-)':
pt.append(')')
else:
pt.append(opt_str)
if value is not None:
pt.append(value)
g = p.add_option_group('Sources')
g.add_option('--cached',
action='callback', callback=carry,
help='Search the index, instead of the work tree')
g.add_option('-r','--revision',
dest='revision', action='append', metavar='TREEish',
help='Search TREEish, instead of the work tree')
g = p.add_option_group('Pattern')
g.add_option('-e',
action='callback', callback=carry,
metavar='PATTERN', type='str',
help='Pattern to search for')
g.add_option('-i', '--ignore-case',
action='callback', callback=carry,
help='Ignore case differences')
g.add_option('-a','--text',
action='callback', callback=carry,
help="Process binary files as if they were text")
g.add_option('-I',
action='callback', callback=carry,
help="Don't match the pattern in binary files")
g.add_option('-w', '--word-regexp',
action='callback', callback=carry,
help='Match the pattern only at word boundaries')
g.add_option('-v', '--invert-match',
action='callback', callback=carry,
help='Select non-matching lines')
g.add_option('-G', '--basic-regexp',
action='callback', callback=carry,
help='Use POSIX basic regexp for patterns (default)')
g.add_option('-E', '--extended-regexp',
action='callback', callback=carry,
help='Use POSIX extended regexp for patterns')
g.add_option('-F', '--fixed-strings',
action='callback', callback=carry,
help='Use fixed strings (not regexp) for pattern')
g = p.add_option_group('Pattern Grouping')
g.add_option('--all-match',
action='callback', callback=carry,
help='Limit match to lines that have all patterns')
g.add_option('--and', '--or', '--not',
action='callback', callback=carry,
help='Boolean operators to combine patterns')
g.add_option('-(','-)',
action='callback', callback=carry,
help='Boolean operator grouping')
g = p.add_option_group('Output')
g.add_option('-n',
action='callback', callback=carry,
help='Prefix the line number to matching lines')
g.add_option('-C',
action='callback', callback=carry,
metavar='CONTEXT', type='str',
help='Show CONTEXT lines around match')
g.add_option('-B',
action='callback', callback=carry,
metavar='CONTEXT', type='str',
help='Show CONTEXT lines before match')
g.add_option('-A',
action='callback', callback=carry,
metavar='CONTEXT', type='str',
help='Show CONTEXT lines after match')
g.add_option('-l','--name-only','--files-with-matches',
action='callback', callback=carry,
help='Show only file names containing matching lines')
g.add_option('-L','--files-without-match',
action='callback', callback=carry,
help='Show only file names not containing matching lines')
def Execute(self, opt, args):
out = GrepColoring(self.manifest.manifestProject.config)
cmd_argv = ['grep']
if out.is_on and git_require((1,6,3)):
cmd_argv.append('--color')
cmd_argv.extend(getattr(opt,'cmd_argv',[]))
if '-e' not in cmd_argv:
if not args:
self.Usage()
cmd_argv.append('-e')
cmd_argv.append(args[0])
args = args[1:]
projects = self.GetProjects(args)
full_name = False
if len(projects) > 1:
cmd_argv.append('--full-name')
full_name = True
have_rev = False
if opt.revision:
if '--cached' in cmd_argv:
print >>sys.stderr,\
'fatal: cannot combine --cached and --revision'
sys.exit(1)
have_rev = True
cmd_argv.extend(opt.revision)
cmd_argv.append('--')
bad_rev = False
have_match = False
for project in projects:
p = GitCommand(project,
cmd_argv,
bare = False,
capture_stdout = True,
capture_stderr = True)
if p.Wait() != 0:
# no results
#
if p.stderr:
if have_rev and 'fatal: ambiguous argument' in p.stderr:
bad_rev = True
else:
out.project('--- project %s ---' % project.relpath)
out.nl()
out.write("%s", p.stderr)
out.nl()
continue
have_match = True
# We cut the last element, to avoid a blank line.
#
r = p.stdout.split('\n')
r = r[0:-1]
if have_rev and full_name:
for line in r:
rev, line = line.split(':', 1)
out.write("%s", rev)
out.write(':')
out.project(project.relpath)
out.write('/')
out.write("%s", line)
out.nl()
elif full_name:
for line in r:
out.project(project.relpath)
out.write('/')
out.write("%s", line)
out.nl()
else:
for line in r:
print line
if have_match:
sys.exit(0)
elif have_rev and bad_rev:
for r in opt.revision:
print >>sys.stderr, "error: can't search revision %s" % r
sys.exit(1)
else:
sys.exit(1)

subcmds/help.py

@@ -13,13 +13,14 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import re
import sys
from formatter import AbstractFormatter, DumbWriter
from color import Coloring
from command import PagedCommand
from command import PagedCommand, MirrorSafeCommand
class Help(PagedCommand):
class Help(PagedCommand, MirrorSafeCommand):
common = False
helpSummary = "Display detailed help on a command"
helpUsage = """
@@ -77,6 +78,7 @@ The most commonly used repo commands are:
print fmt % (name, summary)
print """
See 'repo help <command>' for more information on a specific command.
See 'repo help --all' for a complete list of recognized commands.
"""
def _PrintCommandHelp(self, cmd):
@@ -92,6 +94,8 @@ See 'repo help <command>' for more information on a specific command.
body = getattr(cmd, bodyAttr)
except AttributeError:
return
if body == '' or body is None:
return
self.nl()
@@ -105,19 +109,39 @@ See 'repo help <command>' for more information on a specific command.
body = body.strip()
body = body.replace('%prog', me)
asciidoc_hdr = re.compile(r'^\n?([^\n]{1,})\n([=~-]{2,})$')
for para in body.split("\n\n"):
if para.startswith(' '):
self.write('%s', para)
self.nl()
self.nl()
else:
self.wrap.add_flowing_data(para)
self.wrap.end_paragraph(1)
continue
m = asciidoc_hdr.match(para)
if m:
title = m.group(1)
type = m.group(2)
if type[0] in ('=', '-'):
p = self.heading
else:
def _p(fmt, *args):
self.write(' ')
self.heading(fmt, *args)
p = _p
p('%s', title)
self.nl()
p('%s', ''.ljust(len(title),type[0]))
self.nl()
continue
self.wrap.add_flowing_data(para)
self.wrap.end_paragraph(1)
self.wrap.end_paragraph(0)
out = _Out(self.manifest.globalConfig)
cmd.OptionParser.print_help()
out._PrintSection('Summary', 'helpSummary')
cmd.OptionParser.print_help()
out._PrintSection('Description', 'helpDescription')
def _Options(self, p):
@@ -141,6 +165,7 @@ See 'repo help <command>' for more information on a specific command.
print >>sys.stderr, "repo: '%s' is not a repo command." % name
sys.exit(1)
cmd.manifest = self.manifest
self._PrintCommandHelp(cmd)
else:

subcmds/init.py

@@ -14,15 +14,19 @@
# limitations under the License.
import os
import platform
import re
import shutil
import sys
from color import Coloring
from command import InteractiveCommand
from command import InteractiveCommand, MirrorSafeCommand
from error import ManifestParseError
from remote import Remote
from git_command import git, MIN_GIT_VERSION
from project import SyncBuffer
from git_config import GitConfig
from git_command import git_require, MIN_GIT_VERSION
class Init(InteractiveCommand):
class Init(InteractiveCommand, MirrorSafeCommand):
common = True
helpSummary = "Initialize repo in the current directory"
helpUsage = """
@@ -34,9 +38,27 @@ The latest repo source code and manifest collection is downloaded
from the server and is installed in the .repo/ directory in the
current working directory.
The optional <manifest> argument can be used to specify an alternate
manifest to be used. If no manifest is specified, the manifest
default.xml will be used.
The optional -b argument can be used to select the manifest branch
to check out and use. If no branch is specified, master is assumed.
The optional -m argument can be used to specify an alternate manifest
to be used. If no manifest is specified, the manifest default.xml
will be used.
The --reference option can be used to point to a directory that
has the content of a --mirror sync. This will make the working
directory use as much data as possible from the local reference
directory when fetching from the server. This will make the sync
go a lot faster by reducing data traffic on the network.
Switching Manifest Branches
---------------------------
To switch to another manifest branch, `repo init -b otherbranch`
may be used in an existing client. However, as this only updates the
manifest, a subsequent `repo sync` (or `repo sync -d`) is necessary
to update the working directory files.
"""
def _Options(self, p):
@@ -60,10 +82,24 @@ default.xml will be used.
g.add_option('--mirror',
dest='mirror', action='store_true',
help='mirror the forest')
g.add_option('--reference',
dest='reference',
help='location of mirror directory', metavar='DIR')
g.add_option('--depth', type='int', default=None,
dest='depth',
help='create a shallow clone with given depth; see git clone')
g.add_option('-g', '--groups',
dest='groups', default='default',
help='restrict manifest projects to ones with a specified group',
metavar='GROUP')
g.add_option('-p', '--platform',
dest='platform', default='auto',
help='restrict manifest projects to ones with a specified '
'platform group [auto|all|none|linux|darwin|...]',
metavar='PLATFORM')
# Tool
g = p.add_option_group('Version options')
g = p.add_option_group('repo Version options')
g.add_option('--repo-url',
dest='repo_url',
help='repo repository location', metavar='URL')
@@ -74,39 +110,33 @@ default.xml will be used.
dest='no_repo_verify', action='store_true',
help='do not verify repo source code')
def _CheckGitVersion(self):
ver_str = git.version()
if not ver_str.startswith('git version '):
print >>sys.stderr, 'error: "%s" unsupported' % ver_str
sys.exit(1)
ver_str = ver_str[len('git version '):].strip()
ver_act = tuple(map(lambda x: int(x), ver_str.split('.')[0:3]))
if ver_act < MIN_GIT_VERSION:
need = '.'.join(map(lambda x: str(x), MIN_GIT_VERSION))
print >>sys.stderr, 'fatal: git %s or later required' % need
sys.exit(1)
# Other
g = p.add_option_group('Other options')
g.add_option('--config-name',
dest='config_name', action="store_true", default=False,
help='Always prompt for name/e-mail')
def _SyncManifest(self, opt):
m = self.manifest.manifestProject
is_new = not m.Exists
if not m.Exists:
if is_new:
if not opt.manifest_url:
print >>sys.stderr, 'fatal: manifest url (-u) is required.'
sys.exit(1)
if not opt.quiet:
print >>sys.stderr, 'Getting manifest ...'
print >>sys.stderr, ' from %s' % opt.manifest_url
print >>sys.stderr, 'Get %s' \
% GitConfig.ForUser().UrlInsteadOf(opt.manifest_url)
m._InitGitDir()
if opt.manifest_branch:
m.revision = opt.manifest_branch
m.revisionExpr = opt.manifest_branch
else:
m.revision = 'refs/heads/master'
m.revisionExpr = 'refs/heads/master'
else:
if opt.manifest_branch:
m.revision = opt.manifest_branch
m.revisionExpr = opt.manifest_branch
else:
m.PreSync()
@@ -116,12 +146,55 @@ default.xml will be used.
r.ResetFetch()
r.Save()
if opt.mirror:
m.config.SetString('repo.mirror', 'true')
groups = re.split(r'[,\s]+', opt.groups)
all_platforms = ['linux', 'darwin']
platformize = lambda x: 'platform-' + x
if opt.platform == 'auto':
if (not opt.mirror and
not m.config.GetString('repo.mirror') == 'true'):
groups.append(platformize(platform.system().lower()))
elif opt.platform == 'all':
groups.extend(map(platformize, all_platforms))
elif opt.platform in all_platforms:
groups.append(platformize(opt.platform))
elif opt.platform != 'none':
print >>sys.stderr, 'fatal: invalid platform flag'
sys.exit(1)
m.Sync_NetworkHalf()
m.Sync_LocalHalf()
m.StartBranch('default')
groups = [x for x in groups if x]
groupstr = ','.join(groups)
if opt.platform == 'auto' and groupstr == 'default,platform-' + platform.system().lower():
groupstr = None
m.config.SetString('manifest.groups', groupstr)
if opt.reference:
m.config.SetString('repo.reference', opt.reference)
if opt.mirror:
if is_new:
m.config.SetString('repo.mirror', 'true')
else:
print >>sys.stderr, 'fatal: --mirror not supported on existing client'
sys.exit(1)
if not m.Sync_NetworkHalf(is_new=is_new):
r = m.GetRemote(m.remote.name)
print >>sys.stderr, 'fatal: cannot obtain manifest %s' % r.url
# Better delete the manifest git dir if we created it; otherwise next
# time (when user fixes problems) we won't go through the "is_new" logic.
if is_new:
shutil.rmtree(m.gitdir)
sys.exit(1)
syncbuf = SyncBuffer(m.config)
m.Sync_LocalHalf(syncbuf)
syncbuf.Finish()
if is_new or m.CurrentBranch is None:
if not m.StartBranch('default'):
print >>sys.stderr, 'fatal: cannot create default in manifest'
sys.exit(1)
def _LinkManifest(self, name):
if not name:
@@ -135,20 +208,52 @@ default.xml will be used.
print >>sys.stderr, 'fatal: %s' % str(e)
sys.exit(1)
def _PromptKey(self, prompt, key, value):
def _Prompt(self, prompt, value):
mp = self.manifest.manifestProject
sys.stdout.write('%-10s [%s]: ' % (prompt, value))
a = sys.stdin.readline().strip()
if a != '' and a != value:
mp.config.SetString(key, a)
if a == '':
return value
return a
def _ShouldConfigureUser(self):
gc = self.manifest.globalConfig
mp = self.manifest.manifestProject
# If we don't have local settings, get from global.
if not mp.config.Has('user.name') or not mp.config.Has('user.email'):
if not gc.Has('user.name') or not gc.Has('user.email'):
return True
mp.config.SetString('user.name', gc.GetString('user.name'))
mp.config.SetString('user.email', gc.GetString('user.email'))
print ''
print 'Your identity is: %s <%s>' % (mp.config.GetString('user.name'),
mp.config.GetString('user.email'))
print 'If you want to change this, please re-run \'repo init\' with --config-name'
return False
def _ConfigureUser(self):
mp = self.manifest.manifestProject
print ''
self._PromptKey('Your Name', 'user.name', mp.UserName)
self._PromptKey('Your Email', 'user.email', mp.UserEmail)
while True:
print ''
name = self._Prompt('Your Name', mp.UserName)
email = self._Prompt('Your Email', mp.UserEmail)
print ''
print 'Your identity is: %s <%s>' % (name, email)
sys.stdout.write('is this correct [y/N]? ')
a = sys.stdin.readline().strip()
if a in ('yes', 'y', 't', 'true'):
break
if name != mp.UserName:
mp.config.SetString('user.name', name)
if email != mp.UserEmail:
mp.config.SetString('user.email', email)
def _HasColorSet(self, gc):
for n in ['ui', 'diff', 'status']:
@@ -182,21 +287,43 @@ default.xml will be used.
out.printer(fg='black', attr=c)(' %-6s ', c)
out.nl()
sys.stdout.write('Enable color display in this user account (y/n)? ')
sys.stdout.write('Enable color display in this user account (y/N)? ')
a = sys.stdin.readline().strip().lower()
if a in ('y', 'yes', 't', 'true', 'on'):
gc.SetString('color.ui', 'auto')
def _ConfigureDepth(self, opt):
"""Configure the depth we'll sync down.
Args:
opt: Options from optparse. We care about opt.depth.
"""
# Opt.depth will be non-None if user actually passed --depth to repo init.
if opt.depth is not None:
if opt.depth > 0:
# Positive values will set the depth.
depth = str(opt.depth)
else:
# Negative numbers will clear the depth; passing None to SetString
# will do that.
depth = None
# We store the depth in the main manifest project.
self.manifest.manifestProject.config.SetString('repo.depth', depth)
def Execute(self, opt, args):
self._CheckGitVersion()
git_require(MIN_GIT_VERSION, fail=True)
self._SyncManifest(opt)
self._LinkManifest(opt.manifest_name)
if os.isatty(0) and os.isatty(1) and not opt.mirror:
self._ConfigureUser()
if os.isatty(0) and os.isatty(1) and not self.manifest.IsMirror:
if opt.config_name or self._ShouldConfigureUser():
self._ConfigureUser()
self._ConfigureColor()
if opt.mirror:
self._ConfigureDepth(opt)
if self.manifest.IsMirror:
type = 'mirror '
else:
type = ''
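A few hypothetical invocations tying the new options together (the manifest URL is a placeholder):

repo init -u https://gerrit.example.com/manifest -g default,tools   # keep only these groups
repo init -u https://gerrit.example.com/manifest -p linux           # adds the platform-linux group
repo init -u https://gerrit.example.com/manifest --depth=1          # shallow project clones
repo init -u https://gerrit.example.com/manifest --depth=-1         # clear a stored depth

Per _ConfigureDepth() above, a positive value is stored as repo.depth in the manifest project's configuration and a zero or negative one clears it.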

subcmds/list.py Normal file

@@ -0,0 +1,48 @@
#
# Copyright (C) 2011 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from command import Command, MirrorSafeCommand
class List(Command, MirrorSafeCommand):
common = True
helpSummary = "List projects and their associated directories"
helpUsage = """
%prog [<project>...]
"""
helpDescription = """
List all projects; pass '.' to list the project for the cwd.
This is similar to running: repo forall -c 'echo "$REPO_PATH : $REPO_PROJECT"'.
"""
def Execute(self, opt, args):
"""List all projects and the associated directories.
This may be possible to do with 'repo forall', but repo newbies have
trouble figuring that out. The idea here is that it should be more
discoverable.
Args:
opt: The options. We don't take any.
args: Positional args. Can be a list of projects to list, or empty.
"""
projects = self.GetProjects(args)
lines = []
for project in projects:
lines.append("%s : %s" % (project.relpath, project.name))
lines.sort()
print '\n'.join(lines)

subcmds/manifest.py Normal file

@@ -0,0 +1,77 @@
#
# Copyright (C) 2009 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
from command import PagedCommand
class Manifest(PagedCommand):
common = False
helpSummary = "Manifest inspection utility"
helpUsage = """
%prog [-o {-|NAME.xml} [-r]]
"""
_helpDescription = """
With the -o option, exports the current manifest for inspection.
The manifest and (if present) local_manifest.xml are combined
together to produce a single manifest file. This file can be stored
in a Git repository for use during future 'repo init' invocations.
"""
@property
def helpDescription(self):
help = self._helpDescription + '\n'
r = os.path.dirname(__file__)
r = os.path.dirname(r)
fd = open(os.path.join(r, 'docs', 'manifest-format.txt'))
for line in fd:
help += line
fd.close()
return help
def _Options(self, p):
p.add_option('-r', '--revision-as-HEAD',
dest='peg_rev', action='store_true',
help='Save revisions as current HEAD')
p.add_option('-o', '--output-file',
dest='output_file',
help='File to save the manifest to',
metavar='-|NAME.xml')
def _Output(self, opt):
if opt.output_file == '-':
fd = sys.stdout
else:
fd = open(opt.output_file, 'w')
self.manifest.Save(fd,
peg_rev = opt.peg_rev)
fd.close()
if opt.output_file != '-':
print >>sys.stderr, 'Saved manifest to %s' % opt.output_file
def Execute(self, opt, args):
if args:
self.Usage()
if opt.output_file is not None:
self._Output(opt)
return
print >>sys.stderr, 'error: no operation to perform'
print >>sys.stderr, 'error: see repo help manifest'
sys.exit(1)

subcmds/rebase.py Normal file

@@ -0,0 +1,107 @@
#
# Copyright (C) 2010 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
from command import Command
from git_command import GitCommand
from git_refs import GitRefs, HEAD, R_HEADS, R_TAGS, R_PUB, R_M
from error import GitError
class Rebase(Command):
common = True
helpSummary = "Rebase local branches on upstream branch"
helpUsage = """
%prog {[<project>...] | -i <project>...}
"""
helpDescription = """
'%prog' uses git rebase to move local changes in the current topic branch to
the HEAD of the upstream history, useful when you have made commits in a topic
branch but need to incorporate new upstream changes "underneath" them.
"""
def _Options(self, p):
p.add_option('-i', '--interactive',
dest="interactive", action="store_true",
help="interactive rebase (single project only)")
p.add_option('-f', '--force-rebase',
dest='force_rebase', action='store_true',
help='Pass --force-rebase to git rebase')
p.add_option('--no-ff',
dest='no_ff', action='store_true',
help='Pass --no-ff to git rebase')
p.add_option('-q', '--quiet',
dest='quiet', action='store_true',
help='Pass --quiet to git rebase')
p.add_option('--autosquash',
dest='autosquash', action='store_true',
help='Pass --autosquash to git rebase')
p.add_option('--whitespace',
dest='whitespace', action='store', metavar='WS',
help='Pass --whitespace to git rebase')
def Execute(self, opt, args):
all = self.GetProjects(args)
one_project = len(all) == 1
if opt.interactive and not one_project:
print >>sys.stderr, 'error: interactive rebase not supported with multiple projects'
return -1
for project in all:
cb = project.CurrentBranch
if not cb:
if one_project:
print >>sys.stderr, "error: project %s has a detatched HEAD" % project.relpath
return -1
# ignore branches with detatched HEADs
continue
upbranch = project.GetBranch(cb)
if not upbranch.LocalMerge:
if one_project:
print >>sys.stderr, "error: project %s does not track any remote branches" % project.relpath
return -1
# ignore branches without remotes
continue
args = ["rebase"]
if opt.whitespace:
args.append('--whitespace=%s' % opt.whitespace)
if opt.quiet:
args.append('--quiet')
if opt.force_rebase:
args.append('--force-rebase')
if opt.no_ff:
args.append('--no-ff')
if opt.autosquash:
args.append('--autosquash')
if opt.interactive:
args.append("-i")
args.append(upbranch.LocalMerge)
print >>sys.stderr, '# %s: rebasing %s -> %s' % \
(project.relpath, cb, upbranch.LocalMerge)
if GitCommand(project, args).Wait() != 0:
return -1

subcmds/selfupdate.py Normal file

@@ -0,0 +1,61 @@
#
# Copyright (C) 2009 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from optparse import SUPPRESS_HELP
import sys
from command import Command, MirrorSafeCommand
from subcmds.sync import _PostRepoUpgrade
from subcmds.sync import _PostRepoFetch
class Selfupdate(Command, MirrorSafeCommand):
common = False
helpSummary = "Update repo to the latest version"
helpUsage = """
%prog
"""
helpDescription = """
The '%prog' command upgrades repo to the latest version, if a
newer version is available.
Normally this is done automatically by 'repo sync' and does not
need to be performed by an end-user.
"""
def _Options(self, p):
g = p.add_option_group('repo Version options')
g.add_option('--no-repo-verify',
dest='no_repo_verify', action='store_true',
help='do not verify repo source code')
g.add_option('--repo-upgraded',
dest='repo_upgraded', action='store_true',
help=SUPPRESS_HELP)
def Execute(self, opt, args):
rp = self.manifest.repoProject
rp.PreSync()
if opt.repo_upgraded:
_PostRepoUpgrade(self.manifest)
else:
if not rp.Sync_NetworkHalf():
print >>sys.stderr, "error: can't update repo"
sys.exit(1)
rp.bare_git.gc('--auto')
_PostRepoFetch(rp,
no_repo_verify = opt.no_repo_verify,
verbose = True)

subcmds/smartsync.py Normal file

@@ -0,0 +1,33 @@
#
# Copyright (C) 2010 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sync import Sync
class Smartsync(Sync):
common = True
helpSummary = "Update working tree to the latest known good revision"
helpUsage = """
%prog [<project>...]
"""
helpDescription = """
The '%prog' command is a shortcut for sync -s.
"""
def _Options(self, p):
Sync._Options(self, p, show_smart=False)
def Execute(self, opt, args):
opt.smart_sync = True
Sync.Execute(self, opt, args)

subcmds/stage.py

@@ -55,12 +55,12 @@ The '%prog' command stages files to prepare the next commit.
out = _ProjectList(self.manifest.manifestProject.config)
while True:
out.header(' %-20s %s', 'project', 'path')
out.header(' %s', 'project')
out.nl()
for i in xrange(0, len(all)):
p = all[i]
out.write('%3d: %-20s %s', i + 1, p.name, p.relpath + '/')
out.write('%3d: %s', i + 1, p.relpath + '/')
out.nl()
out.nl()

subcmds/start.py

@@ -16,27 +16,23 @@
import sys
from command import Command
from git_command import git
from progress import Progress
class Start(Command):
common = True
helpSummary = "Start a new branch for development"
helpUsage = """
%prog <newbranchname> [<project>...]
This subcommand starts a new branch of development that is automatically
pulled from a remote branch.
It is equivalent to the following git commands:
"git branch --track <newbranchname> m/<codeline>",
or
"git checkout --track -b <newbranchname> m/<codeline>".
All three forms set up the config entries that repo bases some of its
processing on. Use %prog or git branch or checkout with --track to ensure
the configuration data is set up properly.
%prog <newbranchname> [--all | <project>...]
"""
helpDescription = """
'%prog' begins a new branch of development, starting from the
revision specified in the manifest.
"""
def _Options(self, p):
p.add_option('--all',
dest='all', action='store_true',
help='begin branch in all projects')
def Execute(self, opt, args):
if not args:
@@ -47,5 +43,26 @@ the configuration data is set up properly.
print >>sys.stderr, "error: '%s' is not a valid name" % nb
sys.exit(1)
for project in self.GetProjects(args[1:]):
project.StartBranch(nb)
err = []
projects = []
if not opt.all:
projects = args[1:]
if len(projects) < 1:
print >>sys.stderr, "error: at least one project must be specified"
sys.exit(1)
all = self.GetProjects(projects)
pm = Progress('Starting %s' % nb, len(all))
for project in all:
pm.update()
if not project.StartBranch(nb):
err.append(project)
pm.end()
if err:
for p in err:
print >>sys.stderr,\
"error: %s/: cannot start %s" \
% (p.relpath, nb)
sys.exit(1)

subcmds/status.py

@@ -15,13 +15,117 @@
from command import PagedCommand
try:
import threading as _threading
except ImportError:
import dummy_threading as _threading
import itertools
import sys
import StringIO
class Status(PagedCommand):
common = True
helpSummary = "Show the working tree status"
helpUsage = """
%prog [<project>...]
"""
helpDescription = """
'%prog' compares the working tree to the staging area (aka index),
and the most recent commit on this branch (HEAD), in each project
specified. A summary is displayed, one line per file where there
is a difference between these three states.
The -j/--jobs option can be used to run multiple status queries
in parallel.
Status Display
--------------
The status display is organized into three columns of information,
for example if the file 'subcmds/status.py' is modified in the
project 'repo' on branch 'devwork':
project repo/ branch devwork
-m subcmds/status.py
The first column explains how the staging area (index) differs from
the last commit (HEAD). Its values are always displayed in upper
case and have the following meanings:
-: no difference
A: added (not in HEAD, in index )
M: modified ( in HEAD, in index, different content )
D: deleted ( in HEAD, not in index )
R: renamed (not in HEAD, in index, path changed )
C: copied (not in HEAD, in index, copied from another)
T: mode changed ( in HEAD, in index, same content )
U: unmerged; conflict resolution required
The second column explains how the working directory differs from
the index. Its values are always displayed in lower case and have
the following meanings:
-: new / unknown (not in index, in work tree )
m: modified ( in index, in work tree, modified )
d: deleted ( in index, not in work tree )
"""
def _Options(self, p):
p.add_option('-j', '--jobs',
dest='jobs', action='store', type='int', default=2,
help="number of projects to check simultaneously")
def _StatusHelper(self, project, clean_counter, sem, output):
"""Obtains the status for a specific project.
Obtains the status for a project, redirecting the output to
the specified object. It will release the semaphore
when done.
Args:
project: Project to get status of.
clean_counter: Counter for clean projects.
sem: Semaphore, will call release() when complete.
output: Where to output the status.
"""
try:
state = project.PrintWorkTreeStatus(output)
if state == 'CLEAN':
clean_counter.next()
finally:
sem.release()
def Execute(self, opt, args):
for project in self.GetProjects(args):
project.PrintWorkTreeStatus()
all = self.GetProjects(args)
counter = itertools.count()
if opt.jobs == 1:
for project in all:
state = project.PrintWorkTreeStatus()
if state == 'CLEAN':
counter.next()
else:
sem = _threading.Semaphore(opt.jobs)
threads_and_output = []
for project in all:
sem.acquire()
class BufList(StringIO.StringIO):
def dump(self, ostream):
for entry in self.buflist:
ostream.write(entry)
output = BufList()
t = _threading.Thread(target=self._StatusHelper,
args=(project, counter, sem, output))
threads_and_output.append((t, output))
t.start()
for (t, output) in threads_and_output:
t.join()
output.dump(sys.stdout)
output.close()
if len(all) == counter.next():
print 'nothing to commit (working directory clean)'

subcmds/sync.py

@@ -13,17 +13,46 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from optparse import SUPPRESS_HELP
import os
import re
import shutil
import socket
import subprocess
import sys
import time
import xmlrpclib
try:
import threading as _threading
except ImportError:
import dummy_threading as _threading
try:
import resource
def _rlimit_nofile():
return resource.getrlimit(resource.RLIMIT_NOFILE)
except ImportError:
def _rlimit_nofile():
return (256, 256)
from git_command import GIT
from command import Command
from git_refs import R_HEADS
from project import HEAD
from project import Project
from project import RemoteSpec
from command import Command, MirrorSafeCommand
from error import RepoChangedException, GitError
from project import R_HEADS
from project import SyncBuffer
from progress import Progress
class Sync(Command):
class _FetchError(Exception):
"""Internal error thrown in _FetchHelper() when we don't want stack trace."""
pass
class Sync(Command, MirrorSafeCommand):
jobs = 1
common = True
helpSummary = "Update working tree to the latest revision"
helpUsage = """
@@ -43,27 +72,341 @@ line. Projects can be specified either by name, or by a relative
or absolute path to the project's local directory. If no projects
are specified, '%prog' will synchronize all projects listed in
the manifest.
The -d/--detach option can be used to switch specified projects
back to the manifest revision. This option is especially helpful
if the project is currently on a topic branch, but the manifest
revision is temporarily needed.
The -s/--smart-sync option can be used to sync to a known good
build as specified by the manifest-server element in the current
manifest. The -t/--smart-tag option is similar and allows you to
specify a custom tag/label.
The -f/--force-broken option can be used to proceed with syncing
other projects if a project sync fails.
The --no-clone-bundle option disables any attempt to use
$URL/clone.bundle to bootstrap a new Git repository from a
resumeable bundle file on a content delivery network. This
may be necessary if there are problems with the local Python
HTTP client or proxy configuration, but the Git binary works.
SSH Connections
---------------
If at least one project remote URL uses an SSH connection (ssh://,
git+ssh://, or user@host:path syntax) repo will automatically
enable the SSH ControlMaster option when connecting to that host.
This feature permits other projects in the same '%prog' session to
reuse the same SSH tunnel, saving connection setup overheads.
To disable this behavior on UNIX platforms, set the GIT_SSH
environment variable to 'ssh'. For example:
export GIT_SSH=ssh
%prog
Compatibility
~~~~~~~~~~~~~
This feature is automatically disabled on Windows, due to the lack
of UNIX domain socket support.
This feature is not compatible with url.insteadof rewrites in the
user's ~/.gitconfig. '%prog' is currently not able to perform the
rewrite early enough to establish the ControlMaster tunnel.
If the remote SSH daemon is Gerrit Code Review, version 2.0.10 or
later is required to fix a server side protocol bug.
"""
def _Options(self, p):
p.add_option('--no-repo-verify',
def _Options(self, p, show_smart=True):
self.jobs = self.manifest.default.sync_j
p.add_option('-f', '--force-broken',
dest='force_broken', action='store_true',
help="continue sync even if a project fails to sync")
p.add_option('-l','--local-only',
dest='local_only', action='store_true',
help="only update working tree, don't fetch")
p.add_option('-n','--network-only',
dest='network_only', action='store_true',
help="fetch only, don't update working tree")
p.add_option('-d','--detach',
dest='detach_head', action='store_true',
help='detach projects back to manifest revision')
p.add_option('-c','--current-branch',
dest='current_branch_only', action='store_true',
help='fetch only current branch from server')
p.add_option('-q','--quiet',
dest='quiet', action='store_true',
help='be more quiet')
p.add_option('-j','--jobs',
dest='jobs', action='store', type='int',
help="projects to fetch simultaneously (default %d)" % self.jobs)
p.add_option('-m', '--manifest-name',
dest='manifest_name',
help='temporary manifest to use for this sync', metavar='NAME.xml')
p.add_option('--no-clone-bundle',
dest='no_clone_bundle', action='store_true',
help='disable use of /clone.bundle on HTTP/HTTPS')
if show_smart:
p.add_option('-s', '--smart-sync',
dest='smart_sync', action='store_true',
help='smart sync using manifest from a known good build')
p.add_option('-t', '--smart-tag',
dest='smart_tag', action='store',
help='smart sync using manifest from a known tag')
g = p.add_option_group('repo Version options')
g.add_option('--no-repo-verify',
dest='no_repo_verify', action='store_true',
help='do not verify repo source code')
p.add_option('--repo-upgraded',
g.add_option('--repo-upgraded',
dest='repo_upgraded', action='store_true',
help='perform additional actions after a repo upgrade')
help=SUPPRESS_HELP)
def _Fetch(self, *projects):
def _FetchHelper(self, opt, project, lock, fetched, pm, sem, err_event):
"""Main function of the fetch threads when jobs are > 1.
Args:
opt: Program options returned from optparse. See _Options().
project: Project object for the project to fetch.
lock: Lock for accessing objects that are shared amongst multiple
_FetchHelper() threads.
fetched: set object that we will add project.gitdir to when we're done
(with our lock held).
pm: Instance of a Progress object. We will call pm.update() (with our
lock held).
sem: We'll release() this semaphore when we exit so that another thread
can be started up.
err_event: We'll set this event in the case of an error (after printing
out info about the error).
"""
# We'll set to true once we've locked the lock.
did_lock = False
# Encapsulate everything in a try/except/finally so that:
# - We always set err_event in the case of an exception.
# - We always make sure we call sem.release().
# - We always make sure we unlock the lock if we locked it.
try:
try:
success = project.Sync_NetworkHalf(
quiet=opt.quiet,
current_branch_only=opt.current_branch_only,
clone_bundle=not opt.no_clone_bundle)
# Lock around all the rest of the code, since printing, updating a set
# and Progress.update() are not thread safe.
lock.acquire()
did_lock = True
if not success:
print >>sys.stderr, 'error: Cannot fetch %s' % project.name
if opt.force_broken:
print >>sys.stderr, 'warn: --force-broken, continuing to sync'
else:
raise _FetchError()
fetched.add(project.gitdir)
pm.update()
except _FetchError:
err_event.set()
except:
err_event.set()
raise
finally:
if did_lock:
lock.release()
sem.release()
def _Fetch(self, projects, opt):
fetched = set()
for project in projects:
if project.Sync_NetworkHalf():
fetched.add(project.gitdir)
else:
print >>sys.stderr, 'error: Cannot fetch %s' % project.name
pm = Progress('Fetching projects', len(projects))
if self.jobs == 1:
for project in projects:
pm.update()
if project.Sync_NetworkHalf(quiet=opt.quiet,
current_branch_only=opt.current_branch_only):
fetched.add(project.gitdir)
else:
print >>sys.stderr, 'error: Cannot fetch %s' % project.name
if opt.force_broken:
print >>sys.stderr, 'warn: --force-broken, continuing to sync'
else:
sys.exit(1)
else:
threads = set()
lock = _threading.Lock()
sem = _threading.Semaphore(self.jobs)
err_event = _threading.Event()
for project in projects:
# Check for any errors before starting any new threads.
# ...we'll let existing threads finish, though.
if err_event.isSet():
break
sem.acquire()
t = _threading.Thread(target = self._FetchHelper,
args = (opt,
project,
lock,
fetched,
pm,
sem,
err_event))
threads.add(t)
t.start()
for t in threads:
t.join()
# If we saw an error, exit with code 1 so that other scripts can check.
if err_event.isSet():
print >>sys.stderr, '\nerror: Exited sync due to fetch errors'
sys.exit(1)
pm.end()
for project in projects:
project.bare_git.gc('--auto')
return fetched
def UpdateProjectList(self):
new_project_paths = []
for project in self.GetProjects(None, missing_ok=True):
if project.relpath:
new_project_paths.append(project.relpath)
file_name = 'project.list'
file_path = os.path.join(self.manifest.repodir, file_name)
old_project_paths = []
if os.path.exists(file_path):
fd = open(file_path, 'r')
try:
old_project_paths = fd.read().split('\n')
finally:
fd.close()
for path in old_project_paths:
if not path:
continue
if path not in new_project_paths:
"""If the path has already been deleted, we don't need to do it
"""
if os.path.exists(self.manifest.topdir + '/' + path):
project = Project(
manifest = self.manifest,
name = path,
remote = RemoteSpec('origin'),
gitdir = os.path.join(self.manifest.topdir,
path, '.git'),
worktree = os.path.join(self.manifest.topdir, path),
relpath = path,
revisionExpr = 'HEAD',
revisionId = None,
groups = None)
if project.IsDirty():
print >>sys.stderr, 'error: Cannot remove project "%s": \
uncommitted changes are present' % project.relpath
print >>sys.stderr, ' commit changes, then run sync again'
return -1
else:
print >>sys.stderr, 'Deleting obsolete path %s' % project.worktree
shutil.rmtree(project.worktree)
# Try deleting parent subdirs if they are empty
dir = os.path.dirname(project.worktree)
while dir != self.manifest.topdir:
try:
os.rmdir(dir)
except OSError:
break
dir = os.path.dirname(dir)
new_project_paths.sort()
fd = open(file_path, 'w')
try:
fd.write('\n'.join(new_project_paths))
fd.write('\n')
finally:
fd.close()
return 0
def Execute(self, opt, args):
if opt.jobs:
self.jobs = opt.jobs
if self.jobs > 1:
soft_limit, _ = _rlimit_nofile()
self.jobs = min(self.jobs, (soft_limit - 5) / 3)
if opt.network_only and opt.detach_head:
print >>sys.stderr, 'error: cannot combine -n and -d'
sys.exit(1)
if opt.network_only and opt.local_only:
print >>sys.stderr, 'error: cannot combine -n and -l'
sys.exit(1)
if opt.manifest_name and opt.smart_sync:
print >>sys.stderr, 'error: cannot combine -m and -s'
sys.exit(1)
if opt.manifest_name and opt.smart_tag:
print >>sys.stderr, 'error: cannot combine -m and -t'
sys.exit(1)
if opt.manifest_name:
self.manifest.Override(opt.manifest_name)
if opt.smart_sync or opt.smart_tag:
if not self.manifest.manifest_server:
print >>sys.stderr, \
'error: cannot smart sync: no manifest server defined in manifest'
sys.exit(1)
try:
server = xmlrpclib.Server(self.manifest.manifest_server)
if opt.smart_sync:
p = self.manifest.manifestProject
b = p.GetBranch(p.CurrentBranch)
branch = b.merge
if branch.startswith(R_HEADS):
branch = branch[len(R_HEADS):]
env = os.environ.copy()
if (env.has_key('TARGET_PRODUCT') and
env.has_key('TARGET_BUILD_VARIANT')):
target = '%s-%s' % (env['TARGET_PRODUCT'],
env['TARGET_BUILD_VARIANT'])
[success, manifest_str] = server.GetApprovedManifest(branch, target)
else:
[success, manifest_str] = server.GetApprovedManifest(branch)
else:
assert(opt.smart_tag)
[success, manifest_str] = server.GetManifest(opt.smart_tag)
if success:
manifest_name = "smart_sync_override.xml"
manifest_path = os.path.join(self.manifest.manifestProject.worktree,
manifest_name)
try:
f = open(manifest_path, 'w')
try:
f.write(manifest_str)
finally:
f.close()
except IOError:
print >>sys.stderr, 'error: cannot write manifest to %s' % \
manifest_path
sys.exit(1)
self.manifest.Override(manifest_name)
else:
print >>sys.stderr, 'error: %s' % manifest_str
sys.exit(1)
except socket.error:
print >>sys.stderr, 'error: cannot connect to manifest server %s' % (
self.manifest.manifest_server)
sys.exit(1)
rp = self.manifest.repoProject
rp.PreSync()
@@ -71,41 +414,88 @@ the manifest.
mp.PreSync()
if opt.repo_upgraded:
  _PostRepoUpgrade(self.manifest)
if not opt.local_only:
  mp.Sync_NetworkHalf(quiet=opt.quiet,
                      current_branch_only=opt.current_branch_only)
if mp.HasChanges:
  syncbuf = SyncBuffer(mp.config)
  mp.Sync_LocalHalf(syncbuf)
  if not syncbuf.Finish():
    sys.exit(1)
  self.manifest._Unload()
if opt.jobs is None:
self.jobs = self.manifest.default.sync_j
all = self.GetProjects(args, missing_ok=True)
if not opt.local_only:
to_fetch = []
now = time.time()
if (24 * 60 * 60) <= (now - rp.LastFetch):
to_fetch.append(rp)
to_fetch.extend(all)
fetched = self._Fetch(to_fetch, opt)
_PostRepoFetch(rp, opt.no_repo_verify)
if opt.network_only:
# bail out now; the rest touches the working tree
return
self.manifest._Unload()
all = self.GetProjects(args, missing_ok=True)
missing = []
for project in all:
if project.gitdir not in fetched:
missing.append(project)
self._Fetch(missing, opt)
if self.manifest.IsMirror:
# bail out now, we have no working tree
return
if self.UpdateProjectList():
sys.exit(1)
syncbuf = SyncBuffer(mp.config,
detach_head = opt.detach_head)
pm = Progress('Syncing work tree', len(all))
for project in all:
pm.update()
if project.worktree:
project.Sync_LocalHalf(syncbuf)
pm.end()
print >>sys.stderr
if not syncbuf.Finish():
sys.exit(1)
# If there's a notice that's supposed to print at the end of the sync, print
# it now...
if self.manifest.notice:
print self.manifest.notice
def _PostRepoUpgrade(manifest):
for project in manifest.projects.values():
if project.Exists:
project.PostRepoUpgrade()
def _PostRepoFetch(rp, no_repo_verify=False, verbose=False):
if rp.HasChanges:
print >>sys.stderr, 'info: A new version of repo is available'
print >>sys.stderr, ''
if no_repo_verify or _VerifyTag(rp):
syncbuf = SyncBuffer(rp.config)
rp.Sync_LocalHalf(syncbuf)
if not syncbuf.Finish():
sys.exit(1)
print >>sys.stderr, 'info: Restarting repo with latest version'
raise RepoChangedException(['--repo-upgraded'])
else:
print >>sys.stderr, 'warning: Skipped upgrade to unverified version'
else:
if verbose:
print >>sys.stderr, 'repo version %s is current' % rp.work_git.describe(HEAD)
def _VerifyTag(project):
gpg_dir = os.path.expanduser('~/.repoconfig/gnupg')
@@ -115,17 +505,14 @@ def _VerifyTag(project):
warning: Cannot automatically authenticate repo."""
return True
try:
  cur = project.bare_git.describe(project.GetRevisionId())
except GitError:
  cur = None
if not cur \
   or re.compile(r'^.*-[0-9]{1,}-g[0-9a-f]{1,}$').match(cur):
  rev = project.revisionExpr
  if rev.startswith(R_HEADS):
    rev = rev[len(R_HEADS):]
@@ -135,9 +522,9 @@ warning: Cannot automatically authenticate repo."""
% (project.name, rev)
return False
env = os.environ.copy()
env['GIT_DIR'] = project.gitdir.encode()
env['GNUPGHOME'] = gpg_dir.encode()
cmd = [GIT, 'tag', '-v', cur]
proc = subprocess.Popen(cmd,

subcmds/upload.py

@@ -13,12 +13,25 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import re
import sys
from command import InteractiveCommand
from editor import Editor
from error import HookError, UploadError
from project import RepoHook
UNUSUAL_COMMIT_THRESHOLD = 5
def _ConfirmManyUploads(multiple_branches=False):
if multiple_branches:
print "ATTENTION: One or more branches has an unusually high number of commits."
else:
print "ATTENTION: You are uploading an unusually high number of commits."
print "YOU PROBABLY DO NOT MEAN TO DO THIS. (Did you rebase across branches?)"
answer = raw_input("If you are sure you intend to do this, type 'yes': ").strip()
return answer == "yes"
def _die(fmt, *args):
msg = fmt % args
@@ -35,68 +48,154 @@ class Upload(InteractiveCommand):
common = True
helpSummary = "Upload changes for code review"
helpUsage="""
%prog [--re --cc] [<project>]...
"""
helpDescription = """
The '%prog' command is used to send changes to the Gerrit Code
Review system. It searches for topic branches in local projects
that have not yet been published for review. If multiple topic
branches are found, '%prog' opens an editor to allow the user to
select which branches to upload.

'%prog' searches for uploadable changes in all projects listed at
the command line. Projects can be specified either by name, or by
a relative or absolute path to the project's local directory. If no
projects are specified, '%prog' will search for uploadable changes
in all projects listed in the manifest.
If the --reviewers or --cc options are passed, those emails are
added to the respective list of users, and emails are sent to any
new users. Users passed as --reviewers must already be registered
with the code review system, or the upload will fail.
Configuration
-------------
review.URL.autoupload:
To disable the "Upload ... (y/N)?" prompt, you can set a per-project
or global Git configuration option. If review.URL.autoupload is set
to "true" then repo will assume you always answer "y" at the prompt,
and will not prompt you further. If it is set to "false" then repo
will assume you always answer "n", and will abort.
review.URL.autocopy:
To automatically copy a user or mailing list to all uploaded reviews,
you can set a per-project or global Git option to do so. Specifically,
review.URL.autocopy can be set to a comma separated list of reviewers
who you always want copied on all uploads with a non-empty --re
argument.
review.URL.username:
Override the username used to connect to Gerrit Code Review.
By default the local part of the email address is used.
The URL must match the review URL listed in the manifest XML file,
or in the .git/config within the project. For example:
[remote "origin"]
url = git://git.example.com/project.git
review = http://review.example.com/
[review "http://review.example.com/"]
autoupload = true
autocopy = johndoe@company.com,my-team-alias@company.com
review.URL.uploadtopic:
To add a topic branch whenever uploading a commit, you can set a
per-project or global Git option to do so. If review.URL.uploadtopic
is set to "true" then repo will assume you always want the equivalent
of the -t option to the repo command. If unset or set to "false" then
repo will make use of only the command line option.
References
----------
Gerrit Code Review: http://code.google.com/p/gerrit/
"""
def _Options(self, p):
p.add_option('-t',
dest='auto_topic', action='store_true',
help='Send local branch name to Gerrit Code Review')
p.add_option('--re', '--reviewers',
type='string', action='append', dest='reviewers',
help='Request reviews from these people.')
p.add_option('--cc',
type='string', action='append', dest='cc',
help='Also send email to these email addresses.')
p.add_option('--br',
type='string', action='store', dest='branch',
help='Branch to upload.')
p.add_option('--cbr', '--current-branch',
dest='current_branch', action='store_true',
help='Upload current git branch.')
# Options relating to upload hook. Note that verify and no-verify are NOT
# opposites of each other, which is why they store to different locations.
# We are using them to match 'git commit' syntax.
#
# Combinations:
# - no-verify=False, verify=False (DEFAULT):
# If stdout is a tty, can prompt about running upload hooks if needed.
# If user denies running hooks, the upload is cancelled. If stdout is
# not a tty and we would need to prompt about upload hooks, upload is
# cancelled.
# - no-verify=False, verify=True:
# Always run upload hooks with no prompt.
# - no-verify=True, verify=False:
# Never run upload hooks, but upload anyway (AKA bypass hooks).
# - no-verify=True, verify=True:
# Invalid
p.add_option('--no-verify',
dest='bypass_hooks', action='store_true',
help='Do not run the upload hook.')
p.add_option('--verify',
dest='allow_all_hooks', action='store_true',
help='Run the upload hook without prompting.')
def _SingleBranch(self, opt, branch, people):
  project = branch.project
  name = branch.name
  remote = project.GetBranch(name).remote
  key = 'review.%s.autoupload' % remote.review
  answer = project.config.GetBoolean(key)
  if answer is False:
    _die("upload blocked by %s = false" % key)
  if answer is None:
    date = branch.date
    list = branch.commits
    print 'Upload project %s/ to remote branch %s:' % (project.relpath, project.revisionExpr)
    print '  branch %s (%2d commit%s, %s):' % (
          name,
          len(list),
          len(list) != 1 and 's' or '',
          date)
    for commit in list:
      print '         %s' % commit
    sys.stdout.write('to %s (y/N)? ' % remote.review)
    answer = sys.stdin.readline().strip()
    answer = answer in ('y', 'Y', 'yes', '1', 'true', 't')
  if answer:
    if len(branch.commits) > UNUSUAL_COMMIT_THRESHOLD:
      answer = _ConfirmManyUploads()
  if answer:
    self._UploadAndReport(opt, [branch], people)
  else:
    _die("upload aborted by user")
def _MultipleBranches(self, opt, pending, people):
projects = {}
branches = {}
@@ -114,11 +213,12 @@ files and description associated with the change in Gerrit.
if b:
script.append('#')
script.append('# branch %s (%2d commit%s, %s) to remote branch %s:' % (
              name,
              len(list),
              len(list) != 1 and 's' or '',
              date,
              project.revisionExpr))
for commit in list:
script.append('# %s' % commit)
b[name] = branch
@@ -127,6 +227,11 @@ files and description associated with the change in Gerrit.
branches[project.name] = b
script.append('')
script = [ x.encode('utf-8')
if issubclass(type(x), unicode)
else x
for x in script ]
script = Editor.EditString("\n".join(script)).split("\n")
project_re = re.compile(r'^#?\s*project\s*([^\s]+)/:$')
@@ -155,62 +260,71 @@ files and description associated with the change in Gerrit.
todo.append(branch)
if not todo:
_die("nothing uncommented for upload")
many_commits = False
for branch in todo:
  if len(branch.commits) > UNUSUAL_COMMIT_THRESHOLD:
    many_commits = True
    break
if many_commits:
  if not _ConfirmManyUploads(multiple_branches=True):
    _die("upload aborted by user")
self._UploadAndReport(opt, todo, people)
def _AppendAutoCcList(self, branch, people):
  """
  Appends users from the project's review.URL.autocopy setting to the CC
  list whenever a non-empty reviewer list was supplied.
  """
  name = branch.name
  project = branch.project
  key = 'review.%s.autocopy' % project.GetBranch(name).remote.review
  raw_list = project.config.GetString(key)
  if raw_list is not None and len(people[0]) > 0:
    people[1].extend([entry.strip() for entry in raw_list.split(',')])
def _FindGerritChange(self, branch):
  last_pub = branch.project.WasPublished(branch.name)
  if last_pub is None:
    return ""
  refs = branch.GetPublishedRefs()
  try:
    # refs/changes/XYZ/N --> XYZ
    return refs.get(last_pub).split('/')[-2]
  except:
    return ""
def _UploadAndReport(self, opt, todo, original_people):
have_errors = False
for branch in todo:
try:
people = copy.deepcopy(original_people)
self._AppendAutoCcList(branch, people)
# Check if there are local changes that may have been forgotten
if branch.project.HasChanges():
key = 'review.%s.autoupload' % branch.project.remote.review
answer = branch.project.config.GetBoolean(key)
# if they want to auto upload, let's not ask because it could be automated
if answer is None:
sys.stdout.write('Uncommitted changes in ' + branch.project.name + ' (did you forget to amend?). Continue uploading? (y/N) ')
a = sys.stdin.readline().strip().lower()
if a not in ('y', 'yes', 't', 'true', 'on'):
print >>sys.stderr, "skipping upload"
branch.uploaded = False
branch.error = 'User aborted'
continue
# Check if topic branches should be sent to the server during upload
if opt.auto_topic is not True:
key = 'review.%s.uploadtopic' % branch.project.remote.review
opt.auto_topic = branch.project.config.GetBoolean(key)
branch.UploadForReview(people, auto_topic=opt.auto_topic)
branch.uploaded = True
except UploadError, e:
branch.error = e
@@ -218,15 +332,19 @@ files and description associated with the change in Gerrit.
have_errors = True
print >>sys.stderr, ''
print >>sys.stderr, '----------------------------------------------------------------------'
if have_errors:
for branch in todo:
if not branch.uploaded:
if len(str(branch.error)) <= 30:
  fmt = ' (%s)'
else:
  fmt = '\n       (%s)'
print >>sys.stderr, ('[FAILED] %-15s %-15s' + fmt) % (
        branch.project.relpath + '/', \
        branch.name, \
        str(branch.error))
print >>sys.stderr, ''
for branch in todo:
@@ -234,9 +352,6 @@ files and description associated with the change in Gerrit.
print >>sys.stderr, '[OK    ] %-15s %s' % (
        branch.project.relpath + '/',
        branch.name)
if have_errors:
sys.exit(1)
@@ -246,6 +361,29 @@ files and description associated with the change in Gerrit.
pending = []
reviewers = []
cc = []
branch = None
if opt.branch:
branch = opt.branch
for project in project_list:
if opt.current_branch:
cbr = project.CurrentBranch
avail = [project.GetUploadableBranch(cbr)] if cbr else None
else:
avail = project.GetUploadableBranches(branch)
if avail:
pending.append((project, avail))
if pending and (not opt.bypass_hooks):
hook = RepoHook('pre-upload', self.manifest.repo_hooks_project,
self.manifest.topdir, abort_if_user_denies=True)
pending_proj_names = [project.name for (project, avail) in pending]
try:
hook.Run(opt.allow_all_hooks, project_list=pending_proj_names)
except HookError, e:
print >>sys.stderr, "ERROR: %s" % str(e)
return
if opt.reviewers:
reviewers = _SplitEmails(opt.reviewers)
@@ -253,22 +391,9 @@ files and description associated with the change in Gerrit.
cc = _SplitEmails(opt.cc)
people = (reviewers,cc)
if not pending:
  print >>sys.stdout, "no branches ready for upload"
elif len(pending) == 1 and len(pending[0][1]) == 1:
  self._SingleBranch(opt, pending[0][1][0], people)
else:
  self._MultipleBranches(opt, pending, people)

subcmds/version.py Normal file

@@ -0,0 +1,43 @@
#
# Copyright (C) 2009 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
from command import Command, MirrorSafeCommand
from git_command import git
from project import HEAD
class Version(Command, MirrorSafeCommand):
wrapper_version = None
wrapper_path = None
common = False
helpSummary = "Display the version of repo"
helpUsage = """
%prog
"""
def Execute(self, opt, args):
rp = self.manifest.repoProject
rem = rp.GetRemote(rp.remote.name)
print 'repo version %s' % rp.work_git.describe(HEAD)
print ' (from %s)' % rem.url
if Version.wrapper_path is not None:
print 'repo launcher version %s' % Version.wrapper_version
print ' (from %s)' % Version.wrapper_path
print git.version().strip()
print 'Python %s' % sys.version

tests/fixtures/test.gitconfig vendored Normal file

@@ -0,0 +1,3 @@
[section]
empty
nonempty = true

tests/test_git_config.py Normal file

@@ -0,0 +1,52 @@
import os
import unittest
import git_config
def fixture(*paths):
"""Return a path relative to test/fixtures.
"""
return os.path.join(os.path.dirname(__file__), 'fixtures', *paths)
class GitConfigUnitTest(unittest.TestCase):
"""Tests the GitConfig class.
"""
def setUp(self):
"""Create a GitConfig object using the test.gitconfig fixture.
"""
config_fixture = fixture('test.gitconfig')
self.config = git_config.GitConfig(config_fixture)
def test_GetString_with_empty_config_values(self):
"""
Test config entries with no value.
[section]
empty
"""
val = self.config.GetString('section.empty')
self.assertEqual(val, None)
def test_GetString_with_true_value(self):
"""
Test config entries with a string value.
[section]
nonempty = true
"""
val = self.config.GetString('section.nonempty')
self.assertEqual(val, 'true')
def test_GetString_from_missing_file(self):
"""
Test missing config file
"""
config_fixture = fixture('not.present.gitconfig')
config = git_config.GitConfig(config_fixture)
val = config.GetString('empty')
self.assertEqual(val, None)
if __name__ == '__main__':
unittest.main()


@@ -13,13 +13,22 @@
# See the License for the specific language governing permissions and
# limitations under the License.
class Remote(object):
def __init__(self, name,
fetch=None,
review=None,
projectName=None):
self.name = name
self.fetchUrl = fetch
self.reviewUrl = review
self.projectName = projectName
self.requiredCommits = []
import sys
import os
REPO_TRACE = 'REPO_TRACE'
try:
_TRACE = os.environ[REPO_TRACE] == '1'
except KeyError:
_TRACE = False
def IsTrace():
return _TRACE
def SetTrace():
global _TRACE
_TRACE = True
def Trace(fmt, *args):
if IsTrace():
print >>sys.stderr, fmt % args