
I wish gpg had a decent cli/"api"

At work, we use it to store shared secrets. We can encrypt a file that can be decrypted by multiple keys.

It's a bit hard to remember all the commands, so I made a web UI to manage everything.

The feature I like the most is that I can list which files each person can access and which they cannot.

The thing is, to do that, I need to pass tons of obscure and magic nonsense parameters, e.g.:

    gpg --list-only --no-default-keyring --secret-keyring /dev/null ops.gpg
and then parse the completely un-parsable output to know which keys can decrypt the file.

So far so good. The problem is that this magic trick only works with gpg 2.0.30 or lower. If you have the latest version, you can see every key that can decrypt a file... except yours. There is no way to know whether you can decrypt a file anymore! (how great is this)

I now have to tell people that if they want to use the nice UI they cannot have the latest version of gpg, which is troubling.

I can't believe that in 27 years, it is still impossible to know which keys can decrypt a file. Or even just have some parsable output instead of the pile of crap the gpg tool can vomit out.

So I'm pretty excited about Sequoia :)



Have you had a look at pass¹? It's a bash script that uses gnupg to encrypt secrets as gpg-encrypted plain text files. It has a very nice feature that allows you to specify a list of keys for all secrets in a specific directory (using a .gpg-id file).

Once set up, accessing the secrets is a matter of using the pass command line tool:

    pass edit some/secret
    # Copy the first line of a secret; by convention 
    # this is meant for a password:
    pass -c some/secret
    # Show the whole secret file:
    pass some/secret

    # Combined with git:
    pass git pull
    # Generate a 24-character random password:
    pass generate some/othersecret 24
    pass git push
We use pass to maintain a set of shared secrets with a small team. The (encrypted) files are pushed to a private git repository (pass supports this out of the box).

1: https://www.passwordstore.org/


I wrote pass, originally just as a dumb bash script that I was using privately, but then I put it on the Internet, and so all of a sudden there was a requirement to _not be awful_. The experience has been pretty frustrating, for precisely the reasons pointed out by GP: the gpg command line interface is atrocious. I'm required to parse things in a million crazy ways, buffer data myself, and work around all sorts of weird behaviors. All of that headache, and then in the end all I get is lame PGP crypto out of it? As I said, frustrating.

On the flip-side, at least we've (partially?) succeeded in taming the beast, and the end result is something moderately usable that you happily recommend to folks on HN. So that's good I suppose. :)


An alternative is to use the GPGME library, which does all the ugly gpg output parsing for you. I realize, though, that a C library is not a solution for everything, especially not for a shell script that you want to keep a shell script. :)


Well... There's always https://github.com/taviso/ctypes.sh ...


Holy crap, that's so cool. :) Thanks!


I've been using pass for a year or two now. It's a great tool. Thank you for making it.


I'll also chime in to say that `pass` is a fantastic tool, and I'm so glad I switched to it as my password manager. So the effort you went through to get there (which I can sympathize with since I've had to use the OpenPGP CLI directly plenty of times) is very much appreciated.


Pass is amazing, thank you!


Out of interest: how much of a risk is it to keep diff files/versioned encrypted containers around, in terms of making the encryption easier to break? I could imagine that the additional information would reduce the security of the stored data.


The only problem I can foresee (assuming that the encryption scheme itself has no weaknesses to things like known plaintext attacks) is that it makes it harder to retire an old, potentially compromised key. You need to expunge the git history and any copies.


Makes sense. Thank you.


Can you elaborate, or point me to a resource, on how to use it to share secrets for projects? I tried pass, but it stores the credentials in my home directory; I would rather have them in a file in the project. Also, can it encrypt .env files or something of that sort?


By default it does store passwords in ~/.password-store, but you can override that with environment variables (see PASSWORD_STORE_DIR in the man page). I personally use thin wrapper scripts to change pass's behavior to suit my needs. You can even fork it directly (and cautiously) if you want; it's just a relatively straightforward shell script, after all.

>can also it encrypt .env files or some sort of it?

What are .env files? You mean the config dotfiles in your home directory? If so, you'll probably have to use something like EncFS to encrypt these files. Personally I don't encrypt them, but I also avoid storing cleartext passwords in them as much as possible; many Unix programs support getting passwords from an external command. For example, in my muttrc I have:

    set imap_pass = `pass mail/myemail`


I found this brilliant way to manage your dotfiles in an old hn comment. https://news.ycombinator.com/item?id=11071754

<quote>

I use:

    git init --bare $HOME/.myconf
    alias config='/usr/bin/git --git-dir=$HOME/.myconf/ --work-tree=$HOME'
    config config status.showUntrackedFiles no
where my ~/.myconf directory is a git bare repository. Then any file within the home folder can be versioned with normal commands like:

    config status
    config add .vimrc
    config commit -m "Add vimrc"
    config add .config/redshift.conf
    config commit -m "Add redshift config"
    config push
And so on…

No extra tooling, no symlinks, files are tracked on a version control system, you can use different branches for different computers, and you can replicate your configuration easily on a new installation.

</quote>


> No extra tooling, no symlinks, files are tracked on a version control system, you can use different branches for different computers, and you can replicate your configuration easily on a new installation.

But synchronizing shared configuration is clunky (you have to cherry-pick commits between branches, I guess).

I use NixOS and Nix on my MacBook, which allows you to store and version your whole system configuration. I have factored out different parts of my configuration (emacs, zsh, etc.) in different .nix files. So, I just have one file per machine where I import the relevant configurations and specify the packages that I want to have available. E.g. this is my user configuration on NixOS:

https://github.com/danieldk/nix-home/blob/master/machines/mi...

and macOS:

https://github.com/danieldk/nix-home/blob/master/machines/ma...


Thanks for sharing those. I had kind of written off Nix for my personal laptop after a first glance; going to play around with it again.


This is brilliant! I think I'm about to go and replace some make-based infrastructure as a result.


The software requires configuration, which is read from a .env file of name=value pairs. I was wondering if this tool can help encrypt/decrypt that. So far I have a gpg --symmetric script to do it, but you have to know the password.


For project-specific secrets, you may want to look at git-crypt:

https://www.agwa.name/projects/git-crypt/


> then parse the completely un-parsable output to know which keys can decrypt the file.

This reminds me so much of a tip in Effective Java: 'Always provide an option for users to access every relevant part of your object. If not, people will start to parse your toString output and you will have created an unofficial API that you have to support whether you want to or not' (paraphrased from my faulty memory). That programmers make that mistake again and again is just sad. :(


Hyrum's Law

With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody.

http://www.hyrumslaw.com/

I'm in a team where the only consumers are other internal teams, and yet this still happens. I've found that it doesn't matter if you explicitly state in your documentation not to rely on a particular behavior; clients still will. It's always your fault if you break it, because "it was your change that broke this client" and "it was working fine yesterday".


Some people may depend on them, but the difference between API guarantees and implementation details is that developers are far less reluctant to break your legs... I mean your application if you depend on the latter.

For example, in Java the iteration order of hash maps and the behavior of sorts in the presence of non-reflexive comparators both changed, and people did depend on them. Sun was able to change them because they were not part of the API contract.


Man, depending on sort order with a non-reflexive comparator is a TERRIBLE idea. Almost on the level of https://xkcd.com/1172/


GPG is largely just this toString output parsing.

Which has been the source of a number of bugs in various GPG clients in the past (a few even on the HN frontpage for a few moments) and probably will into the future until GPG is no longer used.

tbh, GPG should just provide an RPC option so applications can securely pass data back and forth and receive proper error messages and codes. But I bet this won't happen because GNU fears that sort of thing, considering they won't split up the GCC compiler.


Well there is GPGME: https://gnupg.org/software/gpgme/index.html

But yes I mostly parse colon separated text.


> But I bet this won't happen because GNU fears that sort of things considering they won't split up the GCC compiler.

That's unfounded, there are plenty of GNU projects that have an API.


It's not unfounded. GCC isn't being split up because they fear someone would build a proprietary compiler using either the backend or frontend API if they did.

I can see that the same reasoning here would be valid; someone could write a proprietary GPG frontend.


You have one example of a project not providing an API, however there are tons of other GNU projects which do.

Furthermore the GCC decision is well documented in mailing list posts, have you ever seen anyone involved in GPG development claim that they won't allow a library/frontend split for fear of someone writing a proprietary frontend?


This is what I love about Python. Every method is public, but methods that start with an underscore mean "use at your own risk". It suggests that the programmer use a preferred way, without strictly requiring it. Private methods treat programmers like infants.


IMO there's a difference between "use at your own risk" and "this is actually not meant to be called from the outside, and if it is, it will violate some invariant in the code". Those are two different concepts and it makes sense to distinguish them. Of course, in high-level, highly managed languages like Python the difference between the two can be rather fuzzy at times, but in low-level code there's often a clear difference.

Take for instance a socket class in C++, you might have a private method that deals with the low level details of the libc's socket calls. It's called at construction time and never later, and calling it would cause the current socket to be replaced by the new one (leaking the fd in the process) because it's only meant to be used at init time. Clearly it's not "use at your own risk", it's "code that calls this from the outside is fundamentally broken". Having the compiler enforce this invariant is a useful feature.

Meanwhile you can also have "use at your own risk" methods, for instance "get_raw_fd" if you want to be able to access the underlying socket. It makes it easy to break things but has legitimate use cases.
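A rough Python translation of the two categories above (a hypothetical class; all names are invented for illustration, and this is not a real socket implementation):

```python
# Hypothetical sketch of the distinction discussed above.

class Connection:
    def __init__(self, fd):
        self.__fd = None
        self.__open(fd)

    def __open(self, fd):
        # "Not meant to be called from outside": invoking this again after
        # construction would clobber self.__fd and leak the old descriptor.
        self.__fd = fd

    def _raw_fd(self):
        # "Use at your own risk": a legitimate escape hatch, but it lets
        # the caller bypass the class's bookkeeping.
        return self.__fd

conn = Connection(42)
```

The single underscore marks the escape hatch; the double underscore (via name mangling) makes the invariant-protecting method awkward to call from outside.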

Of course you could say that you could just tag these private in a certain way and let coder discipline do the rest, but then again you could say that of pretty much all static validation (which, I suppose, makes sense if you like very dynamic languages like Python).


> IMO there's a difference between "use at your own risk" and "this is actually not meant to be called from the outside and if it is it will violate some invariant in the code". Those are two different concepts and it makes sense to distinguish them.

That's what double underscores are for:

  $ python3
  Python 3.5.2 (default, Nov 23 2017, 16:37:01) 
  [GCC 5.4.0 20160609] on linux
  Type "help", "copyright", "credits" or "license" for more information.
  >>> class Foo:
  ...     def foo(self):
  ...         print("I'm a public method.")
  ...     def _foo(self):
  ...         print("I'm a private method.")
  ...     def __foo(self):
  ...         print("I'm so private that you have to really know what you're doing to even call me.")
  ... 
  >>> foo = Foo()
  >>> foo.foo()
  I'm a public method.
  >>> foo._foo()
  I'm a private method.
  >>> foo.__foo()
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  AttributeError: 'Foo' object has no attribute '__foo'
  >>> foo._Foo__foo()
  I'm so private that you have to really know what you're doing to even call me.
  >>> 
Now in theory it's not quite what you're talking about, because it does rely on coder discipline to some extent, but in practice I've never seen it become an issue (although arguably that could be because not many people know that you can access double-underscore variables from outside a class).


Where did you get that idea from? Double underscores have to do with name mangling, nothing else https://stackoverflow.com/questions/1301346/what-is-the-mean...


It's just how I've always seen it used.


IME if you allow someone to use your private API, you can be sure they will do so at some point. And then they will be unhappy when you change it. Sure, you can go with "I said it was private, your problem", but that is the road to zero users.


So much this. I've seen it happen time and time again.

At least in Java, the reflection API serves as a "break glass" barrier to make sure folks understand they're doing something they shouldn't. Someone will still do it, and they'll still blame you when their app breaks later... but I like to believe it at least scares some folks away.


Honestly, I don't see people getting unhappy at breaking private API functionality. Most grab a library when they need it and never upgrade. They only do so when there's a pressing security risk or some whiz-bang new feature they need.

The common example: programmer grabs a library to solve a problem; a few weeks later they realize it doesn't really solve the problem and requires modifications to the library to do some custom job; they make the modifications. That's normally the end of the story for several years. No point in upgrading the library unless it really solves a business problem.

That constitutes the vast majority of developers.

Of course there are other organizations that always want the latest and greatest version and are constantly upgrading. To them, I'd say sure, stick to the public methods, otherwise you're creating a big headache down the road. But I wouldn't require it of them.


Closed source dependencies treat programmers as infants. With open source dependencies, private methods are just an organization tool, not an actual limitation on the programmer.


If a programmer cannot access some variable or function because another developer thought that they "shouldn't", how is that not treating them like infants?


I think the API is the GPGME library?

https://gnupg.org/software/gpgme/index.html

... which internally calls gnupg using the --with-colons argument, which is how you're supposed to get machine-readable output.

That said, maybe your use case isn't very accessible via the standard gpg commands. It sounds like you want to parse the ciphertext and extract the recipient key IDs? You can do that with gpg --list-packets, but this is really intended for debugging, not automated consumption. It doesn't look too hard to parse, but who knows how stable it will be in the future.
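The colon format is at least mechanical to split; a minimal sketch (field positions follow GnuPG's DETAILS file; the sample listing below is invented for illustration):

```python
# Minimal sketch: split a gpg --with-colons key listing into records.
# Field positions (record type at index 0, validity at 1, key ID at 4)
# follow the DETAILS file shipped with GnuPG; sample_listing is invented.

sample_listing = """\
pub:u:4096:1:ABCDEF0123456789:1514764800:::u:::scESC::::::23::0:
uid:u::::1514764800::0123456789ABCDEF::Alice <alice@example.com>::::::::::0:
sub:u:4096:1:0123456789ABCDEF:1514764800::::::e::::::23:
"""

def parse_colons(text):
    """Yield one dict per record with a few well-known fields."""
    for line in text.splitlines():
        fields = line.split(":")
        yield {
            "type": fields[0],       # pub, uid, sub, ...
            "validity": fields[1],   # u = ultimate, f = full, m = marginal, ...
            "keyid": fields[4] if len(fields) > 4 else "",
        }

keyids = [r["keyid"] for r in parse_colons(sample_listing)
          if r["type"] in ("pub", "sub")]
```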


(for example)

    gpg --list-only --list-packets foo | gawk '/^:pubkey\>/ { print $9 }'


I have a command-line wrapper around gpg for similar reasons - https://dotat.at/prog/regpg/

The magic you are missing is `--quiet --status-fd 1`, then look for ENC_TO lines - the documentation can be found in https://git.gnupg.org/cgi-bin/gitweb.cgi?p=gnupg.git;a=blob;...

Based on my testing this works with gnupg 2.1 and 2.2, so I think it should help with your problem.
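For reference, a sketch of what consuming those ENC_TO lines might look like (the sample status output is invented; in practice it would come from something like the invocation described above):

```python
# Sketch: extract recipient key IDs from gpg --status-fd output by
# looking for ENC_TO lines. sample_status is invented for illustration.

sample_status = """\
[GNUPG:] ENC_TO 1234567890ABCDEF 1 0
[GNUPG:] ENC_TO FEDCBA0987654321 18 0
[GNUPG:] DECRYPTION_FAILED
"""

def enc_to_keyids(status_text):
    """Return the long key ID of every key the file is encrypted to."""
    keyids = []
    for line in status_text.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[:2] == ["[GNUPG:]", "ENC_TO"]:
            keyids.append(parts[2])
    return keyids
```

Checking whether *you* can decrypt the file is then a matter of intersecting these key IDs with your own secret-key IDs.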


Yes, more people should know about the --status-fd option. I only wish there were a version that gave JSON output or something a little more foolproof to parse; the record syntax is easy to analyze with sed/awk/etc, but you have to look up the docs for the meaning of each field, and I wonder about the quoting sometimes.


Oh god this. GPG's commandline is so inconsistent and frustrating to use. Plus it has all of these concepts that hardly anybody cares about in real life. Like the "trust level" of a key in the system. There are 5 (maybe 6) trust levels and it would take someone really paranoid to use most of them. Of course imported keys default to the lowest (most useless) level so you have to change it, which is a pain in the ass to do from the commandline (you have to write an expect script or do a wholesale import of a "trust database").

And then there was Ubuntu 16 that couldn't seem to import private keys at all or must have required some kind of super secret commandline option to allow it.

Honestly at this point I've been waiting for the inevitable article about how GPG has been maintained by one starving guy in his basement for the past 20 years and it turns out it's a total mess and nobody noticed because nobody was looking. Basically OpenSSL all over again.


> Honestly at this point I've been waiting for the inevitable article about how GPG has been maintained by one starving guy in his basement for the past 20 years ...

Perhaps you missed it in 2015:

https://www.propublica.org/article/the-worlds-email-encrypti...

The current funding page is at https://gnupg.org/donate/


New GnuPG can use the TOFU trust model, which simplifies this flow (an example here: https://www.kernel.org/category/signatures.html#using-the-we... ).

> Of course imported keys default to the lowest (most useless) level so you have to change it

You don't need to adjust trust levels; just sign the key locally (lsign) and it'll be valid. (There is a difference between key validity and trust; check out this excellent resource: https://www.linux.com/learn/pgp-web-trust-core-concepts-behi... ).


That article makes the Web of Trust seem like an even bigger mistake. Once you have more than a handful of keys in the system, the interactions become complex; too complex for good security, IMHO.


Does it?

For a key to be considered valid it must be either ultimately trusted, signed by an ultimately trusted key, signed by one fully trusted valid key, or signed by three marginally trusted valid keys.

It seems to me there are only two properties to track and quite an easy calculation to do (fizz-buzz level).
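As a sanity check on that claim, the rule can be sketched in a few lines of Python (toy data, invented names; real gpg also tracks signature chain depth, marginals-needed settings, and expiry, which this ignores):

```python
# Toy sketch of the validity rule: a key is valid if it is ultimately
# trusted, signed by a valid key with full/ultimate trust, or signed by
# three valid keys with marginal trust.

def compute_validity(trust, signatures):
    """trust: {key: 'ultimate' | 'full' | 'marginal' | 'none'}
    signatures: {key: set of keys that signed it}
    Returns {key: bool}, iterating to a fixed point."""
    valid = {k: trust.get(k) == "ultimate" for k in trust}
    changed = True
    while changed:
        changed = False
        for key, signers in signatures.items():
            if valid.get(key):
                continue
            full = sum(1 for s in signers
                       if valid.get(s) and trust.get(s) in ("ultimate", "full"))
            marginal = sum(1 for s in signers
                           if valid.get(s) and trust.get(s) == "marginal")
            if full >= 1 or marginal >= 3:
                valid[key] = True
                changed = True
    return valid

trust = {"me": "ultimate", "alice": "full", "bob": "none",
         "c1": "marginal", "c2": "marginal", "c3": "marginal",
         "carol": "none", "mallory": "none"}
signatures = {"alice": {"me"},              # one ultimately trusted signer
              "bob": {"alice"},             # one fully trusted valid signer
              "c1": {"me"}, "c2": {"me"}, "c3": {"me"},
              "carol": {"c1", "c2", "c3"},  # three marginal valid signers
              "mallory": {"bob"}}           # valid signer, but no trust
validity = compute_validity(trust, signatures)
```

Note the mallory case: a signature from a *valid* but *untrusted* key confers nothing, which is exactly the validity/trust distinction from the article.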


That's already three conditions, and those spider up to the conditions above.

It's like ACLs. On the surface they seem so easy, but in practice they're a nightmare to manage because you have to track so many dependencies to figure out a simple yes/no question.


There is one assigned property (trust), one relations table (signing key, signed key), one computed property (validity, the output), and one condition.

I'd be very happy to see a simpler solution to the problem of decentralized authenticity.

Some projects do rely on Web of Trust, for example Linux kernel (https://www.kernel.org/signature.html#kernel-org-web-of-trus...) or Arch Linux (https://www.archlinux.org/master-keys/).


Looking at the `sq` command line client doesn't give me hope that it is going to be friendly: https://docs.sequoia-pgp.org/sq/index.html But if it's at least going to be consistent, that would already be a big win.


What would make a friendly interface in your opinion?

The sq frontend uses git style subcommands to clearly separate actions from options. This is something that gpg doesn't do too well. For instance, commands (e.g., -e) look like options (e.g., -r). If no command is given, gpg tries to guess what you meant, which is perhaps good for users, but bad for programmers. And if an option isn't relevant to a command, it is often just ignored, which again, is perhaps reasonable for users, but bad for programmers.


I have nothing against the git-style subcommands; in fact I think they are great. A friendly interface would in my opinion be designed from the perspective of an end user. In the case of GPG the end user has a few goals in mind: to encrypt, decrypt, or sign data. Although you can do a lot more things, for most users those would be secondary goals. I guess the average user doesn't necessarily know about autocrypt, ASCII armor, OpenPGP packets, etc. Those users would have to guess whether they need them (do I need autocrypt? why isn't it a default?). To be honest I don't think the usage output is very bad in its current form, but as a start for something that will evolve over the years I am not so sure.


Now it needs a --output=json option to make it awesome and usable from scripts and programs.


We've been considering this, but we'd rather have people use the library. We already have the start of a Python interface, which is pretty easy to use, IMHO.

But, I suspect that some people will insist on a shell script. So, we'll probably go this route sooner rather than later to avoid developers trying to parse the output of sq in an ad-hoc manner.


I'm sure you've already considered all the possible solutions, but why don't you run gpg in a subprocess as a different user, so it can list "Your" key (unless by "you" you mean "the user that originally encrypted the file")...?


EnvKey[1] might be interesting to you (disclaimer - I'm the founder). It's similar in principle to the homegrown system you've created, but has had a lot of time put into smoothing out the ux. It gives you a single place to manage configuration/secrets for all your projects.

It uses OpenPGP.js (maintained by ProtonMail) and golang's crypto/x/openpgp instead of gpg.

1 - https://www.envkey.com


Have you looked at Keybase? It offers a wrapper around the GPG commands. I know it can be used to encrypt/decrypt and sign stuff, but I haven't looked at it in detail so I don't know if it will suffice, but it might. `keybase help pgp` will list the commands. I'm also not sure if it's usable if you don't have a Keybase account (though those are free).


Are you adding your own identity to the list of recipients?

This works for gpg 2.1.18:

    gpg -u ${MYHEX} --batch --passphrase-file /path/to/some/hard-coded-passphrase --compress-level 1 --cipher-algo AES256 --sign --encrypt -r ${MYHEX} -r ${RECP1} -r ${RECP2} -r ${RECPn} -o ${OUTPUTFILE}.gpg ${INPUTFILE}


Or, you know, you can contribute JSON output to GPG. I mean, open source and stuff.



