
Hello HN! I'm the founder of HashiCorp.

I'm excited to see Boundary here! I want to note a few things about Boundary: why we made it, how it differs from other solutions in the space, and so on.

* Boundary is free and open source. Similar to when we built Vault, we feel like the solution space for identity-based security is too commercialized. We want to provide access to this type of security to a broader set of people because we feel it's the right way to think about access control. Note: of course, as a company we plan on commercializing Boundary at some point, but we'll do this similarly to Vault: the major feature set of Boundary will remain free and open source forever.

* Dynamic resource catalogs. Other tools in this space usually require manually maintaining a catalog of servers, databases, applications, etc. We're integrating Boundary closely with Terraform, AWS/GCP/Azure, Kubernetes, etc. to give you live auto-updating catalogs based on tags. (Note: this feature is coming in 0.2, and not in this initial release, but is well planned at this point)

* Dynamic credentials. Existing tools often require static credentials. Boundary 0.1 uses static credentials, too, but we're already working on integrating Boundary with Vault and other systems to provide full end-to-end dynamic credentials. You authenticate with your identity, and instead of reusing the same credentials on the backend, we pull dynamic per-session credentials.

And more! Remember this is a 0.1 release. We have a lot of vision and roadmap laid out for this project and we are hard at work on that now. We're really excited about what's to come here.

Specifically, as a 0.1, Boundary focuses on layer 4 connections (TCP) with minimal layer 7 awareness for protocols such as SSH. This will be expanded dramatically to support multiple DB protocols, Microsoft Remote Desktop, and more.

Also, we're releasing another new product tomorrow that is more developer-focused, if security is not your cup of tea. Stay tuned.

The Boundary team and I will be around the comments to answer any questions.



Happy Nomad + Consul + Terraform user here.

Thanks a lot for the great products, but please give us managed Nomad already. Or even better: a Heroku-like app platform. I want to give you money, but I really dislike your company's enterprise offerings.

BTW I believe there's a great opportunity for Hashicorp right now. Cloud providers are good at selling building blocks, but are terrible at selling a vision of how you should build your applications. On the other hand, low code / enterprise application platforms are a disgrace as always. IMO a coherent stack of managed Nomad + Consul + Vault could provide a solid middle ground for those who want to build apps without the burden of managing K8s or navigating through the incomprehensible maze of products offered by public clouds.


Hello! Thank you :)

(1) HCP Nomad is coming. We announced HCP Consul public beta and HCP Vault private beta today (on AWS, more clouds later). HCP Nomad is planned but not quite ready to talk about beyond that yet. That is "managed Nomad."

(2) Re: Heroku-like app platform. Watch tomorrow's keynote or catch up on our announcements tomorrow. It isn't this, but I think it'll give you an idea of the vision we're heading towards and that is relevant to this idea.


>Or even better: a Heroku like app platform.

So Hashiku? :)

Seriously, Heroku seems to have stopped innovating. I wonder how much Heroku is worth now.


Argh. I already find it a nightmare to figure out how to combine hashicorp tools together. Now there's one more! ;)

E.g., if I want a Consul-backed Vault, whilst using Vault to generate TLS certs or other creds for Consul. Especially if I want to run either/both of those services using Nomad, backed by Consul. Hopefully I won't have the option of authenticating against any of these services using Boundary. Especially if Boundary is backed by Consul.


Indeed. Our recommendation with Vault now is to use the built-in storage[1] to break that dependency. If you must use Consul, we recommend separate clusters.

One way we're simplifying this a lot for people is the introduction of our managed services[2][3]. We understand not everyone can use a managed service though!

Boundary will integrate fairly deeply with Consul/Vault but these integrations will be optional.

[1]: https://www.vaultproject.io/docs/configuration/storage/raft [2]: https://www.hashicorp.com/blog/hcp-consul-public-beta [3]: https://www.hashicorp.com/blog/vault-on-the-hashicorp-cloud-...


Thanks for the response. My comment was half in jest, but it has been a pain point for me.


This comment resonates with me so hard. Specifically TLS certs, private certificate authorities and Consul. Like I wanna run my PCA out of Vault (right?), but if using Consul as the backend how do I bootstrap? Sounds like the reply from Michael seems to suggest running the integrated backend, which I can get behind.


Yep, we use the integrated vault backend.

In our case, we use Let's Encrypt to get certificates for Vault and then bootstrap a Vault cluster with internal storage. Then you have Vault and you can use Terraform to configure a Consul TLS backend.

And then there is a little hitch: because consul-template cannot easily create multiple files from a single Vault API call, you cannot use consul-template directly to create the necessary certificate files. We've written a small messy tool there. But once you have that, it's fairly straightforward to generate Consul + Nomad TLS certs for the trust and then you're set.
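Roughly, all such a tool has to do is split Vault's single pki/issue-style JSON response into the separate PEM files Consul and Nomad expect. A hypothetical sketch (the payload keys follow Vault's response format; file names and paths are illustrative, not our actual code):

```python
import json
import sys
from pathlib import Path


def split_vault_cert(payload: dict, out_dir: str) -> None:
    """Split a Vault pki/issue-style response into separate PEM files.

    Expects the usual keys under "data": certificate, private_key, issuing_ca.
    """
    data = payload["data"]
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / "cert.pem").write_text(data["certificate"] + "\n")
    (out / "key.pem").write_text(data["private_key"] + "\n")
    (out / "ca.pem").write_text(data["issuing_ca"] + "\n")


if __name__ == "__main__" and len(sys.argv) > 1:
    # e.g.: vault write -format=json pki/issue/consul ... | python split_certs.py /etc/consul.d/tls
    split_vault_cert(json.load(sys.stdin), sys.argv[1])
```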


So I actually do this today, and I use Vault. This sounds weird, but I spin up a "bootstrap PKI" Vault that is local-only, and produces, e.g.: "consul.service.dc.consul" certs with the issuer labeled as "bootstrap PKI intermediate" or some such. I generate a full suite of these for everything in a space, get it all up and running, then there's a 2nd layer of automation where self-certs are issued.

That said, I'm moving to a central distributed Vault that is mostly going to exist as a PKI so I'll only really need to repeat this process once more! Going to be using the raft internal engine for this one, and spread it physically across the globe so performance is going to be pretty terrible by design, but it should be quite resilient!


Maybe you’re not using Terraform. I suspect that your problem is an insufficient usage of HCL.


All hail Hashi-stack!


What is used to secure/encrypt the connection between the clients and the workers?

I did a quick search in the GitHub repo for WireGuard and didn't get any results so I guess you aren't using it.



Thanks! That is exactly what I was looking for.


Do you have a video showing a demo of managing a fleet of servers? Does this also address machine-to-machine ssh key trusts? Do you have a contrib repo with existing ansible, chef, puppet scripts to build your cluster and also for deploying agents to machines?


Hi Mitchell: what's your competitive landscape with Boundary?

When I first looked at the product description, I thought I might be looking at a "zero-trust identity-aware-proxy" sort of thing, but as I read more I got more of the "privileged access management" vibe with more of a focus on controlling access to infrastructure for developers vs. applications for end users.


So I've been casually doing some research into this in the past and was just updating my list so here's what I have so far. If I have missed any, please let me know.

* Azure App Proxy

* Google IAP

* Amazon WorkLink

* Cloudflare Access

* Zscaler Private Access

* Duo Beyond

* Hashicorp Boundary



* PrivX by SSH.COM

We provide a lean PAM solution for multi-cloud infrastructure access.




I believe Teleport is SSH only.



I think there may be some overlap with Amazon Systems Manager too.


Google BeyondCorp?


IAP is Google’s concrete implementation/product, BeyondCorp is the overall philosophy (not a product)


I think BeyondCorp == IAP


https://smallstep.com/

One example. I have been testing smallstep, which puts an IdP around SSH (with group management), and also includes a dynamic host catalog (hosts run an agent that phones home to your identity provider).

However, I am very excited about Boundary as it seems to be a much more comprehensive solution.


I hope this isn’t too big of a question but what do you see as the migration path towards these newer “zero trust” access control technologies for organizations that are all in on VPNs and are in a hybrid cloud position?


As you say, it's a big question. But one way to start is by integrating this _within your VPN_ such that network access + credentials alone are not enough. With Boundary you could do this by setting up firewalls on the end hosts to only allow ingress from Boundary worker nodes.

Eventually you can migrate towards Boundary nodes (or similar technologies) being the public ingress instead of a VPN endpoint.

(Edit: clarified that I meant firewalls on the end hosts, not on the VPN or elsewhere in the network.)
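On AWS, for example, that end-host firewall could be a security group rule along these lines (Terraform; all resource names here are placeholders):

```hcl
# Allow SSH to the end hosts only from the Boundary workers' security group.
# All names here are illustrative.
resource "aws_security_group_rule" "ssh_from_boundary_workers" {
  type                     = "ingress"
  from_port                = 22
  to_port                  = 22
  protocol                 = "tcp"
  security_group_id        = aws_security_group.end_hosts.id
  source_security_group_id = aws_security_group.boundary_workers.id
}
```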


This is awesome, thanks for making this. Boundary seems like the missing open source building block to achieve Zero Trust.

Zero Trust means authenticating per application instead of per network. For more context see https://about.gitlab.com/blog/2019/04/01/evolution-of-zero-t...

Proxying connections as Boundary does seems like the most elegant solution to achieve this in a way that doesn't require modifying the application.


Over in another thread this was compared to Google's BeyondCorp. Can you comment and compare/contrast Boundary with the concepts of BeyondCorp?


Boundary can be viewed as an implementation of some of these ideas!


Is there a simple paper that explains how this works on a technical level? I have a hard time visualizing how a connection to a remote host would be set up if it runs through Boundary. Does "without requiring direct network access" mean Boundary works as a proxy? And how does Boundary enable the connection if the host does not have direct network access?


We don't have a white paper on this yet, but we have a whiteboard video that explains how it works conceptually as well as at a more technical level of deployment architecture and data flow. https://www.youtube.com/watch?v=tUMe7EsXYBQ&feature=emb_titl...


Armon, just wanted to say your whiteboard videos are excellent. And the clarity of thought demonstrated in them over the years has been a great ad for the products too. The low tech aspect also feels more human.

But I had a chuckle at the idea of you wheeling a whiteboard into your house (if that is where it is filmed).


This is a really nice video. I appreciate the patient walkthrough of the concepts and motivation.


Wonderful video, really clear!


By "direct network access" we mean between the client and the end host. The Boundary worker node (which proxies traffic) would need to be able to make a network connection to the end host, and the client in turn would need to be able to make a network connection to the worker node.

This indirection provides a way to keep your public and private (or even private and private) networks distinct to remove "being on the same network" as a sufficient credential for access. At the same time, it ensures that the traffic is only proxied if that particular session is authenticated.
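Mechanically, once a session is authorized, the worker's data path is ordinary TCP proxying; stripped of all the auth and session management, it's just bidirectional byte copying. A toy sketch of that data path (illustrative only, not Boundary's actual implementation):

```python
import socket
import threading


def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src closes, then half-close dst."""
    while True:
        chunk = src.recv(4096)
        if not chunk:
            break
        dst.sendall(chunk)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass  # peer may already be gone


def relay(client: socket.socket, target: socket.socket) -> None:
    """Proxy a single authorized session: client <-> target."""
    t = threading.Thread(target=pump, args=(target, client), daemon=True)
    t.start()
    pump(client, target)
    t.join()
```

In the real system the `client` socket would only be handed to the relay after the session is authenticated and authorized; the end host never needs to be reachable by the client directly.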


I can see how that works for an internal network. How does this work for SaaS solutions that would normally be directly on the internet? Would they have to be "shielded" to be on a private network and somehow be "Boundary enabled"?

And could this be done in a way that is completely transparent to the user (without them having to start a connection to the worker first, and then make a connection to the desired service)?


Generally speaking this is designed for accessing your own systems, not the systems of a third party being consumed as a SaaS. That said, any such provider that allows you to restrict the set of IPs allowed to make calls to the service would operate in a Boundary-friendly mode.


It would be interesting if the networking model for the end targets could also be inverted, so that an agent (or something) on the end target could make an outbound connection to establish a reverse tunnel to the proxy that user connections could then be sent over.

The use case I'm thinking of is for IoT or robotics, where you have devices you want to manage being deployed into remote networks that you don't have much control over. It's really helpful in this situation if devices make outbound connections only, so that network operators don't have to configure their firewalls to port forward or set up a VPN.

Edit: clearer language


It seems like using WireGuard on the "end target" to automatically connect to (WireGuard on) the proxy would be an easy workaround.

I did basically the same thing years ago for remote console devices deployed inside various customer networks where I had little or no control over the network. At that time, I used OpenVPN to automatically connect back to our "VPN servers" -- providing access to the device even if it was behind two or three layers of NAT (which, unfortunately, wasn't uncommon!).
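The device-side WireGuard config for that pattern is small; the key piece is a keepalive so the outbound NAT mapping stays open and the server can always reach back (all keys and addresses below are placeholders):

```ini
# /etc/wireguard/wg0.conf on the device (all values illustrative)
[Interface]
PrivateKey = <device-private-key>
Address = 10.100.0.2/32

[Peer]
# The proxy/rendezvous server the device dials out to
PublicKey = <server-public-key>
Endpoint = proxy.example.com:51820
AllowedIPs = 10.100.0.1/32
# Keep the outbound mapping alive so the server can reach back through NAT
PersistentKeepalive = 25
```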


Second this!

CloudFlare Access allows this, using the cloudflared daemon, which acts as a reverse proxy. It essentially means the endpoint can be closed off to incoming connections from the internet, and you don't need to maintain various firewall whitelists (and hope they don't go out of sync).

Is something like this on the roadmap for Boundary?


Without committing to any specifics, I'll say that we are very aware of use-cases where a daemon on the end host can provide enhanced benefits.

As you can imagine we did quite a bit of research with our existing users/customers while working on the design of Boundary. One thing we heard almost universally was "please don't require us to install another agent on our boxes". So we decided to focus initially on transparent use cases that only require running additional nodes (Boundary controller/worker) without requiring additional software to be installed/secured/maintained on your end hosts.

> the endpoint can be closed off to incoming connections from the internet, and you don't need to maintain various firewall whitelists

If you think about this a bit differently, a Boundary worker is also acting as a reverse proxy gating access to your non-public network resources. You can definitely use Boundary right now to take resources you have on the public Internet, put them in a private-only subnet or security group, and then use a Boundary worker to gate access. It's simply a reverse proxy running on a different host rather than one running on the end host. You wouldn't _need_ to add a firewall to ensure that only Boundary workers can make incoming calls to the end hosts, it's simply defense in depth.


Thinking of this as a means for privileged access management, would it be possible for Boundary to gather artifacts (e.g. keystroke logs and/or screen shots) from the session?

This might trigger some folks but have you explored any options for delivering some or all of the Boundary infrastructure through serverless/faas?


Yes this is on the roadmap!


From a first look this is really exciting. And cool to see you here on HN! I love your positioning and how you're first and foremost building FOSS software and tools that you then leverage, as opposed to building a commercial offering that you then release software for. It's a vital distinction that sets you apart from e.g. Google.

Let's say you have an org that's doing the whole Consul/Nomad/Vault thing, and starting to have their Nomad jobs use Consul Connect (and its proxies/gateways for external traffic)... that's already a proxy sidecar used for all service ports. How does Boundary fit here? Is it put before/after Connect, is the plan to integrate them, or are they supposed to not be used together?


In an immediate sense you could have targets point to services handled by Connect, so you'd have client -> Boundary worker -> local Connect entrypoint -> end service.

We'll be looking more closely at other integration possibilities going forward!


Are there any plans or a way to use existing tools? By existing tools I mean WinSCP or any other tools that use a normal SSH client, RDP, etc. I guess for SSH and RDP you can just run the Boundary CLI with the predefined target in a terminal embedded into the UI (mRemoteNG, MobaXterm, etc.), but tools like WinSCP are very much used for SFTP file transfers.

A desktop client with a list of services/targets would also be great. Especially for the less technologically inclined individuals.

I know that people have their own opinions on port knocking, but I find it a good tool to remove a lot of noise. Some pre-built tool for that would be nice, but you could always just use fwknop-2.


You can do this already. The `boundary connect ssh` stuff is just a convenience. You can spin up a local Boundary proxy to anything and just connect anything that speaks TCP over it. This allows you to use all the tools you just named.

A desktop client is on the way, we already have an internal build of parts of it but it requires more work and didn't make it for 0.1.
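To make the "proxy anything TCP" part concrete, something along these lines should cover the WinSCP/sftp case (the target ID is made up; check `boundary connect -h` for the exact flags):

```shell
$ boundary connect -target-id ttcp_1234567890 -listen-port 8022
# leaves a local listener on 127.0.0.1:8022; in another terminal,
# point any TCP-speaking tool at it, e.g. sftp (WinSCP is the same idea):
$ sftp -P 8022 user@127.0.0.1
```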


Thanks for answering.

The boundary proxy is an OK step, but the user experience should be streamlined, especially if it's for teams and orgs and not just individuals who want to hack scripts. But I fully understand it's a 0.1 release.

Another thing I couldn't find in the docs is support for multiple installations, let's say I have different vpcs (In different accounts) or I have one on-prem installation and one in a cloud how do I login/switch/configure the cli to work seamlessly with multiple controllers.


We don't have something natively, but you can control the address via BOUNDARY_ADDR env var or the -addr flag per-call, and you can use -token-name with the CLI to switch between named tokens, which can be sourced from different accounts. Together it'd be pretty easy to write a shell alias to do what you're looking for.
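For example, a couple of tiny wrapper functions would do it (addresses and token names are placeholders for whatever you've configured):

```shell
# Wrap the CLI once per controller. Assumes you've already run
# `boundary authenticate ... -token-name prod` (and likewise "onprem").
bnd_prod()   { BOUNDARY_ADDR=https://boundary.prod.example.com boundary "$@" -token-name prod; }
bnd_onprem() { BOUNDARY_ADDR=https://boundary.onprem.example.com boundary "$@" -token-name onprem; }
```

Then `bnd_prod targets list` and `bnd_onprem targets list` hit the two controllers with their respective tokens.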


Given dynamic resource catalogs and dynamic credentials, any plans to integrate dynamic policy engines, such as Open Policy Agent? https://www.openpolicyagent.org


Yep. This is a little bit further out on the roadmap but yes, we plan on integrating dynamic policy engines.


Hey Mitchell, congrats on the new announcements, great stuff! Out of curiosity, how are you building and operating HCP? Are you running it on top of Kubernetes or Nomad, or are you doing some other custom stuff?


    - Full HashiCorp stack (Nomad, Consul, Vault, Terraform)
    - Cadence (https://temporal.io/)
    - Microservice architecture over gRPC and Consul Connect
    - All services written in Go
    - Customer clusters are created/managed by programmatically running Terraform using just-in-time cloud credentials from Vault
    - All internal TLS certs for customer clusters dynamically created using Vault
    - All external TLS certs for customer clusters dynamically created using LetsEncrypt via Terraform
    - Frontend is Ember


> Customer clusters are created/managed by programmatically running Terraform

I have soooo many questions about best practices doing this. I run a service that needs to dynamically provision AWS resources, and lacking a clear path to do this programmatically, I shell out to Terraform.

* I assume you aren't shelling out :). Do you have any additional helper libraries on top of the Terraform code base to make it more of a programmatically consumable API, as opposed to an end-user application?

* Are you still pointing at a directory with resources defined in HCL, or are the resources defined programmatically?

* What are you using for state storage?

* What is the execution environment for the programmatic Terraform process? Since Terraform uses external processes for plugins, I've hit some issues with resource constraints around the max number of process sysctl's in containerized environment where I have multiple Terraform processes running in the same container.

edit: formatting


Yeah this isn't very easy to get right at the moment so there is not going to be any silver bullet here. We had to iterate on our runner a lot to get this right, but we have a lot of experience since we do this for Terraform Cloud too.

Answering your questions:

> * I assume you aren't shelling out :). Do you have any additional helper libraries on top of the Terraform code base to make it more of a programmatically consumable API, as opposed to an end-user application?

We in fact are. There are lots of security concerns you have to consider with this. We published a library to make this easier: https://github.com/hashicorp/terraform-exec

> * Are you still pointing at a directory with resources defined in HCL, or are the resources defined programmatically?

HCL mixed with the JSON flavor of HCL for programmatically generated stuff. Variables in JSON format also programmatically generated.
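The JSON flavor is what makes the "programmatically generated" part easy, since a *.tf.json or *.tfvars.json file is just serialized JSON with the same structure the HCL would have. A minimal illustrative sketch (resource names and values are made up):

```python
import json


def write_tfvars(path: str, variables: dict) -> None:
    """Write a Terraform *.tfvars.json file from a plain dict."""
    with open(path, "w") as f:
        json.dump(variables, f, indent=2, sort_keys=True)


# A generated resource block in the JSON flavor of HCL looks like this
# (hypothetical resource; this dict would be dumped to a *.tf.json file):
cluster_tf_json = {
    "resource": {
        "aws_instance": {
            "consul_server": {
                "ami": "ami-0123456789abcdef0",
                "instance_type": "m5.large",
                "tags": {"cluster_id": "hypothetical-cluster"},
            }
        }
    }
}
```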

> * What are you using for state storage?

We output it to a file and handle this in an HCP microservice. We encrypt it using the customer-specific key with Vault and store it in a bucket that only the customer-specific credential has access to. If there is an RCE exploit somehow in our workflows, they can only access that customer's metadata.

> * What is the execution environment for the programmatic Terraform process? Since Terraform uses external processes for plugins, I've hit some issues with resource constraints around the max number of process sysctl's in containerized environment where I have multiple Terraform processes running in the same container.

Containers in HCP and VMs in Terraform Cloud due to increased isolation requirements. HCP has less strict requirements because the Terraform configs and inputs are more tightly controlled.


>> * I assume you aren't shelling out :)...

>>

> We in fact are.

Words cannot express the joy I feel in reading this. Thanks so much for the responses!


Just for clarification, is "Cadence" a thing you built with Temporal? I see nothing on that site called "Cadence".


Temporal is the fork of Cadence by the original creators. It is still open source under the MIT license.

I'm the former creator and tech lead of Cadence and currently the tech lead of Temporal.


Looks interesting! Couple of things:

1. It's not clear to me how you actually secure the targets? Do you just enable access to the IP address of the controller proxy? In the video you mention a gateway but there's no description of that in the docs?

2. Is it possible to proxy a web browser session? Or is it limited to individual requests via something like curl at the moment?


Do you think there will be any synergy or potential interaction with consul connect at some point?


Absolutely, 100%. This is already well discussed internally. :)


Looks great! A couple of questions:

Can you view logs of SSH sessions after the fact?

Can you live-view a session?

Can you require a pairing authorization like with https://github.com/square/sudo_pair?


All of the above is on the roadmap.

Our initial focus is on making the connections easy. We have some work to do there still. We'll then move on to more management features like this. They're both super important but from an initial adoption perspective we feel the latter is moot if the former (connections) don't work easily.


Makes sense. You should integrate Tailscale too, so you don't need to shunt traffic through the Boundary nodes.


Would it not be easier to replace SSH with something more modern that actually exposes that as a feature?

I've been thinking about that the past couple of years, with today's building blocks an SSH alternative is so easy to build. I bet if you guys were to build or back such a system it would be the right quality and get the adoption it needs.

Opaque SSH sessions are such a thorn in my side.


mitchellh

I'm sorry, but please cut the corporate-speak.

Reality is that your statements are different from your actions.

"similarly to Vault, the major featureset of Boundary will remain free"

Sounds great, doesn't it?

Except HashiCorp decided to hide multi-factor authentication in Vault behind the paywall.

I mean, I'll forgive you for putting a lot of the Vault features behind the paywall (e.g. replication).

But for a security product, putting a core component of 21st-century security (MFA) behind the paywall?

Pretty unforgivable.


> * Boundary is free and open source. Similar to when we built Vault, we feel like the solution space for identity-based security is too commercialized. We want to provide access to this type of security to a broader set of people because we feel it's the right way to think about access control. Note: of course, as a company we plan on commercializing Boundary at some point, but we'll do this similarly to Vault: the major feature set of Boundary will remain free and open source forever.

I hate this corporate speak. You're breaking into the space by giving away (basic, as you will commercialize any advanced) features under the guise of open source altruism. The products HashiCorp sells are open core, and you should be more honest about it (GitLab is!). I wish you operated more like other, real, open source companies that use subscriptions or managed service offerings and don't lock features behind various obscure pricing tiers. This is Shareware 2.0.

The difference between what HashiCorp does and what a real open source company like Rancher does is stark: HashiCorp has products, Rancher builds communities. Contributors to HashiCorp's stuff have to play in a very specific sandbox, lest they implement lucrative features. Contributors to Rancher help the community at large and have full visibility into the codebase, empowering them to fix or add functionality without restrictions.


I'm sorry, I'm not trying to use any doublespeak here.

Boundary is free and open source. There is no corporate speak here. It is FOSS licensed (MPL2) and everything announced today is completely FOSS.

We do sell open core software and if there is any place where you feel we aren't being honest about that please let me know and I'll work to address that. I added that "NOTE" at the end of the point specifically to ensure I was being honest and show I wasn't trying to hide anything.

We are also starting to offer managed services for folks who prefer to consume our software that way. The managed service offerings do unlock the typically enterprise features. Example: https://www.hashicorp.com/blog/hcp-consul-public-beta


> I wish you operated more like other, real, open source companies that use subscriptions or managed service offerings and don't lock features behind various obscure pricing tiers.

"I want all of the functionality I want without having to pay for it." I hate how discussions around software businesses so often descend into purity tests around how much a company chooses to give away. Software is indeed eating the world, but the eternal battle of who has to pay for the underlying tools of said software continues.


The problem is not not wanting to pay for software. Hashicorp enterprise products have very interesting features which the open source code is lacking (e.g. nomad namespacing) but they are insanely expensive so you are forced to use the open source versions as the enterprise versions are targeted at fortune x companies.


How is this corporate speak? If an indie dev said his/her project is going to be open source initially and then newer features would get monetized, would your first thought be that this dev is "breaking into the space under the guise of open source altruism"?


If they started out by misleadingly[0] describing it as "$THING is free and open source."? Yes!

Edit: 0: It's (presumably) technically not false now, but the implication is that $THING is honestly intended to be FOSS, immediately followed by admitting that their actual intent is to sabotage that, embrace-extend-extinguish-style, as soon as it's commercially expedient to do so.


> their actual intent is to sabotage that as soon as it’s commercially expedient to do so.

Sabotage??? Wow, that’s quite an accusation for a company that’s, you know, a company. You might have an argument if they kept quiet about plans to monetize the product later, but that allegation is laughable.

If you’re not comfortable with the terms, don’t use the product. They’re being upfront about their plans. This anti-commercial position is hypocritical.


The issue brought up is understandable, but the history of the specific company we are talking about (rather than generalizing!) must be considered.

Is HashiCorp known to do this?

All I've heard are good things about HashiCorp from people who use HashiCorp products.

Second, it can't be forgotten these are companies. A company exists to create value for itself in some way.

It's the natural behavior of any company.

However, in my opinion, the "open core" design seems to be very preferable amongst technologists (myself included). Essentially we are paying for additional features which we'd normally wait years for from a sole contributor.


Some people felt burned by Vault, where it looked like the free version could be used in production but it couldn't, and then the enterprise version is very expensive.


> it looks like the free version can be used in production

I think you might be confusing vault with another product?

We self-host vault in production, and it doesn't cost us a dime.

(other than the engineers we pay internally to operate it, of course)


Err what? Vault can absolutely be used in production for free. If you want the enterprise features, then you pay.


Why can't the free version of Vault be used in production?


Production-worthiness depends on your needs. The free edition is perfectly good for most people; however, there are several features and modules that are only available in the Enterprise Edition. Notably, some of the disaster recovery, scale-out, and multi-factor authentication features cost extra.

ref: https://www.hashicorp.com/products/vault/pricing


I think the problem was that auto-unseal wasn't free (it is now, so kudos to HashiCorp for listening).


> Is HashiCorp known to do this?

HashiCorp and other companies doing "devops" tools are known for using "open core" and hijacking the spirit of open source in many ways.


Man, this really represents the rift in Open Source and Corporate development right now. It seems like there are developers who contribute to Open Source because they like the mission, the impact, and the values. In contrast, there are others who contribute to open source because their job requires or mandates it. Then there's people who have a mix of both.

All three have wildly different values and historically corporations aren't very good at listening to anyone that isn't waving a check. They use reasoning like "priorities" to close source formerly open source projects, bend project values to reflect their own values, and wedge projects with funding in exchange for representation or control. Corporate controlled and born projects are often used as marketing or for good PR, a cursory browsing of a company's Twitter page will show how they utilize it for this type of end.

I don't really read Mitchell's speak as corporate or double speak, but I do think that referring to HashiCorp (and other) projects as "open source" is a half truth. The line that I draw here is that I don't think Mitchell is lying, rather, I think that open source is now an umbrella term that means very little and really terms like open core, free and open source software, etc are more concise. We owe that outcome to inviting our corporate friends into the fold of open source with not enough restrictions, tracking, and accountability but there's a piece of me that feels this outcome was largely intentional because it's become a means to an end as I described above. These could just be feelings but the situation is common enough that it's relatable.

I'd encourage corporations to be more transparent in their verbiage, their investments, and their representation in these projects so that it doesn't continue to confuse people who participate in and enjoy the "free" side of open source. When I look at an open source project I'd love to know if a majority of the maintainers or funding comes from a corporation. If those things are true, then as someone who highly believes in the ideals of free software I may want to stay far away from people who are susceptible to corporate influence and values. On the other hand, that increased transparency may help clear the air and prevent issues from being perceived as non-transparent or outright misrepresentation.


> that referring to HashiCorp (and other) projects as "open source" is a half truth

Spot on. Corporate "open source" is often open only in terms of licensing, but not in terms of values.

Many companies use tricks to prevent successful forks and keep tight control over the development process.


I can tell you both from an inner source and open source standpoint, executives (more than engineers it seems, but that could just be my friends) have an outright fear of forks.


so?


I think it's not a fair thing to say. HashiCorp's projects are using MPL 2.0, and please correct me if I'm wrong (IANAL!) it would allow you to create an open source fork of say consul, call it OpenConsul and continue development there. That this hasn't happened yet (or if it did, it never gained any traction) is a testament to HashiCorp being a responsible custodian of its projects and their respective communities.


There are folks who would loathe subscriptions or managed services just as equally, I hope you realize that.



