
My belief is that we've been slowly building up to using general purpose languages, one small step at a time, throughout the infrastructure as code, DevOps, and SRE journeys this past 10 years. INI files, XML, JSON, and YAML aren't sufficiently expressive -- lacking for loops, conditionals, variable references, and any sort of abstraction -- so, of course, we add templates to them. But as the author (IMHO rightfully) points out, we just end up with a funky, poor approximation of a language.

I think this approach is a byproduct of thinking about infrastructure and configuration -- and the cloud generally -- as an "afterthought," not a core part of an application's infrastructure. Containers, Kubernetes, serverless, and more hosted services all change this, and Chef, Puppet, and others laid the groundwork to think differently about what the future looks like. More developers today than ever before need to think about how to build and configure cloud software.

We started the Pulumi project to solve this very problem, so I'm admittedly biased, and I hope you forgive the plug -- I only mention it here because I think it contributes to the discussion. Our approach is to simply use general purpose languages like TypeScript, Python, and Go, while still having infrastructure as code. An important thing to realize is that infrastructure as code is based on the idea of a goal state. Using a full blown language to generate that goal state generally doesn't threaten the repeatability, determinism, or robustness of the solution, provided you've got an engine handling state management, diffing, resource CRUD, and so on. We've been able to apply this universally across AWS, Azure, GCP, and Kubernetes, often mixing their configuration in the same program.
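
To make the goal-state idea concrete, here is a minimal sketch in Python (illustrative only -- this is not Pulumi's actual API): the program uses ordinary language features to generate a plain goal state, and a separate engine owns the diffing and CRUD, so the loops never threaten repeatability.

```python
def goal_state(replicas: int) -> dict:
    """Full-blown language features (a loop) produce a plain, static goal."""
    return {
        f"web-{i}": {"type": "instance", "size": "small"}
        for i in range(replicas)
    }

def plan(current: dict, goal: dict) -> dict:
    """The engine diffs current vs. goal and decides which CRUD ops to run."""
    return {
        "create": sorted(goal.keys() - current.keys()),
        "delete": sorted(current.keys() - goal.keys()),
        "update": sorted(k for k in goal.keys() & current.keys()
                         if goal[k] != current[k]),
    }

current = {"web-0": {"type": "instance", "size": "small"}}
print(plan(current, goal_state(3)))
# The same program always yields the same plan for the same inputs.
```

The point is that determinism lives in the engine, not the language: the program only ever produces data.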

Again, I'm biased and want to admit that, however if you're sick of YAML, it's definitely worth checking out. We'd love your feedback:

- Project website: https://pulumi.io/

- All open source on GitHub: https://github.com/pulumi/pulumi

- Example of abstractions: https://blog.pulumi.com/the-fastest-path-to-deploying-kubern...

- Example of serverless as event handlers: https://blog.pulumi.com/lambdas-as-lambdas-the-magic-of-simp...

Pulumi may not be the solution for everyone, but I'm fairly optimistic that this is where we're all heading.

Joe



This is a great analysis, but it's missing a fundamental point: why do we have a problem with these approximations of a programming language or just using a programming language to template stuff?

Because your build then becomes an actual program (i.e. Turing complete) and you have to refactor and maintain it! This is the common problem of using a "programming language as configuration" (e.g. gulp?)

Dhall solves exactly this problem: https://dhall-lang.org

It has the same premise as Pulumi, but without the Turing completeness (I don't know if/how Pulumi avoids that, but if it does it should be part of the pitch), so you cannot shoot yourself in the foot by building an abstraction castle in your build system/infrastructure config.

We use it at work to generate all the Infra-as-Code configurations from a single Dhall config: Terraform, Kubernetes, SQL, etc.

And there is already an integration with Kubernetes: https://github.com/dhall-lang/dhall-kubernetes


> We use it at work to generate all the Infra-as-Code configurations from a single Dhall config

This is the key bit and not something which is pitched well enough from the Dhall landing pages: using straight YAML forces you to repeat yourself in multiple areas for each individual tool being used, and these repetitions have to stay consistent across multiple tools. What Dhall does is allow you to write a single config and use it to derive the correct configurations for each tool that you use. So you can write a single configuration file from which, eventually, every single part of your system is derived - Terraform infrastructure, Kubernetes objects, application config, everything. When you pull it off, it's simply magical.
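
A rough illustration of the "single config, many derived outputs" idea in plain Python (the field names and tool schemas below are made up for illustration; Dhall adds types and guaranteed termination on top of this):

```python
# One source-of-truth config...
APP = {"name": "billing", "image": "billing:1.4.2", "replicas": 3}

def to_kubernetes(app: dict) -> dict:
    """Derive an (illustrative) Kubernetes-shaped object."""
    return {
        "kind": "Deployment",
        "metadata": {"name": app["name"]},
        "spec": {
            "replicas": app["replicas"],
            "containers": [{"name": app["name"], "image": app["image"]}],
        },
    }

def to_terraform(app: dict) -> dict:
    """Derive an (illustrative) Terraform-shaped object."""
    return {"resource": {"aws_ecs_service": {app["name"]: {
        "desired_count": app["replicas"],
    }}}}

# Change APP["replicas"] once, and every derived config stays consistent.
k8s, tf = to_kubernetes(APP), to_terraform(APP)
```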

You can think of it like this: JavaScript is a horrible, no-good, very bad language, and yet all browser programming is done in JavaScript because every browser supports it - so too are JSON and YAML horrible configuration languages. But JavaScript gave rise to abstractions like TypeScript, a much better language that compiles down to JavaScript for compatibility. TypeScript is to JavaScript what Dhall is to JSON and YAML - the fact is, pretty much everything is configured with JSON and YAML, and Dhall makes it much, much easier to live in that world, with no need for the systems being configured to support it.

Considering the relative obscurity of Dhall, it's basically the best-kept secret in the DevOps world right now, and it's a shame more people don't know about it.


Dhall appears to be expressive enough that I can't see why you wouldn't have to refactor and maintain the Dhall code?

Writing Dhall code looks exactly like programming to me, and the programmer must possess the necessary programming skills to produce good Dhall code. A random guy with a text editor will make just as much of a mess in Dhall as they would with a "real" programming language.

I don't see how the restrictions in Dhall really help much in this regard. Turing completeness feels like a red herring to me.


Not a user of Dhall, just a fan, but refactoring of Dhall configuration should be extremely easy. You make a change, and your configuration stays the same, which is easy to verify. (Thanks to https://en.wikipedia.org/wiki/Normalization_property_(abstra... )

For TC languages, deciding whether two programs (the original and the refactored one) do the same thing is undecidable in general. If the language is not TC, then it is more feasible.
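
For pure, terminating config programs, the practical version of this check is simple: evaluate both versions and compare the results. A sketch (my own illustration, not Dhall):

```python
def config_v1(envs):
    """Original: an explicit loop builds per-environment config."""
    out = {}
    for e in envs:
        out[e] = {"log_level": "debug" if e == "dev" else "info"}
    return out

def config_v2(envs):
    """Refactored: a comprehension. Same inputs must yield same outputs."""
    return {e: {"log_level": "debug" if e == "dev" else "info"}
            for e in envs}

envs = ["dev", "staging", "prod"]
# This comparison is only trustworthy because both functions are pure
# and always terminate -- exactly what a non-TC language guarantees.
assert config_v1(envs) == config_v2(envs)
```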


You can compare the outputs of two programs.

Sure, a TC program may not finish to produce output you can compare, but in my experience that's only a theoretical problem.


You can do more than just compare the output of two programs in Dhall. You can verify using a semantic integrity check that two programs are the same for all possible inputs. For example:

  $ dhall hash <<< 'λ(x : Natural) → x + 0'
  sha256:986613701cf8cc883c2490af81d5fdcfb0f33f840870acaac21689f57c1baab6

  $ dhall hash <<< 'λ(x : Natural) → x'
  sha256:986613701cf8cc883c2490af81d5fdcfb0f33f840870acaac21689f57c1baab6

  $ dhall hash <<< 'λ(y : Natural) → y'
  sha256:986613701cf8cc883c2490af81d5fdcfb0f33f840870acaac21689f57c1baab6
The cryptographic hash is smart enough that many behavior-preserving changes don't perturb the hash.


Actually, with Dhall you should be able to compare the programs themselves, even without full "input" (there is even an example on the Dhall page; see "You can reduce functions to normal form, even when they haven't been applied to all of their arguments").

So you can for example leave some parameters out of your config and still validate the correctness of refactoring.

If you use a general purpose programming language, then even comparing just the output might be difficult - most languages allow you to do I/O, so it's possible that the configuration depends on some side channel.

I would say that if you are only using a general language "sensibly" for configuration, then you are effectively restricting yourself in the same way that Dhall does.


This sounds like a render test?


I don't get the problem with using a turing complete language to generate configuration. There's nothing wrong with maintaining and refactoring a program, that's a natural process for any program. If you don't want an infinite loop, don't write one, as you wouldn't in any other program. You can choose as much or as little abstraction as you so wish.

Give me a real language any day over dhall or jsonnet.


This explains the disadvantages of using a general-purpose programming language as a configuration language:

https://github.com/dhall-lang/dhall-lang/wiki/Safety-guarant...


FWIW jsonnet is a "real" language. It's a dynamically typed, lazily evaluated, purely functional programming language.


Fair enough. I should have said "general purpose language" rather than "real", which makes for flame-bait.


I once built a mandelbrot fractal renderer which emitted a data-URL encoded PNG string to stdout in BCL (a spiritual predecessor of Jsonnet @ Google).

Yeah, I know what you mean. It lacks generic input/output: you cannot read or write arbitrary files, perform arbitrary network requests, etc.

I do like that restriction in the context of managing configuration systems, because it allows you to build hermetic evaluations.

With kubecfg we added the ability to import from URLs, which I wish was available out of the box in jsonnet.


This is how Lua started, as a config language, but it gradually added more features that people found useful in config, and became Turing complete.


Lua was TC from the start; it came with the procedural concepts from Modula - if/while/repeat - and functions.


I meant SOL, the predecessor, before Lua proper.


What's so bad about Turing completeness? I haven't had a decent look at Dhall, but I'm betting I could probably write an exponential Dhall program that won't terminate in the lifetime of the universe.

The real reason for giving up Turing equivalence was probably to get dependent types. This gives very powerful static guarantees, including the presence/absence of fields under non-trivial record operations such as merge. In using dependent types, they have also had to give up significantly on type inference, which is really going to annoy the average JavaScript/Ruby programmer.


> you have to refactor and maintain it

You already have to do that, so why not do it in a reasonably powerful language?


Here's a nice explanation on why using "reasonably powerful languages" has many disadvantages: https://github.com/dhall-lang/dhall-lang/wiki/Safety-guarant...

Also you might be familiar with the Rule of Least Power: https://en.wikipedia.org/wiki/Rule_of_least_power


> My belief is that we've been slowly building up to using general purpose languages, one small step at a time, throughout the infrastructure as code, DevOps, and SRE journeys this past 10 years.

I think that you’re right, and I think it’s great, because we have a programming model in which code is data and data is code: Lisp & S-expressions.

It’d be downright awesome to have a Lisp-based system which used dynamic scoping to meld geographical & environmental (e.g. production/development) configuration items. But then, it’d be downright awesome if the world had seriously picked up Lisp in the 80s & 90s, and had spent the last twenty years innovating, rather than reïnventing the wheel, only this time square-shaped. But then, the same thing could be said about Plan 9 …

I’ve not yet had the time to take a look at Pulumi, but I hope to have time soon.


> I think that you’re right, and I think it’s great, because we have a programming model in which code is data and data is code: Lisp & S-expressions.

"Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of CommonLisp."

http://wiki.c2.com/?GreenspunsTenthRuleOfProgramming

Seriously, this has happened again and again and again. You have software, so you configure it via a clean and simple text syntax, then the configuration needs to be generated and the syntax becomes more complicated, then the next system you do has an "API" instead so you can configure it via programming, which is too complicated so the next time you Do it Right and go with a simple text file, which is then outgrown when the configuration it stores becomes too complicated...

It's like a circle of life thing.


And people are vehemently agreeing/disagreeing depending on their phase shift in the Turing complete vs declarative carousel.

Compare with: strongly vs weakly typed languages


That saying was very true of Fortran, reasonably true of C, and mostly doesn't apply to newer languages.


I think the parts of Lisp that tended to be rebuilt have mostly been incorporated into the newer languages. (At least, it's been a very long time since I've had to rewrite a fundamental data structure, etc.)


You don’t need code-is-data for what your parent is describing. All you need is code that outputs data. Or even better, code that initiates contact with other code.

The only requirement is a commitment to doing things imperatively in a real programming language. It’s hard to resist the temptation to do things declaratively (because it’s easier to imagine a declarative interface that describes your problem than an abstraction of the procedure which will solve it) but you are never forced to.


As the kids say: stop trying to make Lisp happen, it's not going to happen.

It has become yet another community that's fighting a struggle that everyone else ended years ago, like the few Japanese in jungles who refused to surrender. I'm not entirely sure why it's not been adopted, but I suspect it's because most people strongly prefer (a) visually semantically distinct scope delimiters and (b) function-outside-brackets syntax, i.e. f(a, b) rather than (f a b).

Or you could go the other way and say that JSON is s-exps with curly brackets so it should be made executable as such, and build that language.


> As the kids say: stop trying to make Lisp happen, it's not going to happen.

That's probably true, but I think it's useful to fight the good fight regardless. Even if Lisp & s-expressions don't, in fact, take over the world (and I think they will), arguing in their favour might help increase the chance that whatever inferior technology does end up getting adopted is better than it could have been.

> Or you could go the other way and say that JSON is s-exps with curly brackets so it should be made executable as such, and build that language.

The problem is that without symbols, that ends up being hideously ugly. This:

    ["if",
     ["<", 1, 2],
     "less than",
     "greater than or equal to"
    ]
is appreciably worse than:

    (if (< 1 2)
        "less than"
        "greater than or equal to")
And alternatives like:

    {"if": [[1, "<", 2], "less than", "greater than or equal to"]}
are so much worse that I don't think anyone could seriously expect to use them.
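
For what it's worth, the "executable JSON" idea is easy to prototype. Here's a toy evaluator (entirely illustrative) where lists are calls and `if` is a lazy special form:

```python
# Operators the toy language supports (illustrative, minimal set).
OPS = {
    "<": lambda a, b: a < b,
    "+": lambda a, b: a + b,
}

def evaluate(expr):
    """Evaluate a JSON-as-s-expression: lists are calls, rest are literals."""
    if not isinstance(expr, list):
        return expr                      # literals evaluate to themselves
    head, *args = expr
    if head == "if":                     # special form: only one branch runs
        cond, then, alt = args
        return evaluate(then) if evaluate(cond) else evaluate(alt)
    if head in OPS:
        return OPS[head](*[evaluate(a) for a in args])
    raise ValueError(f"unknown operator: {head!r}")

program = ["if", ["<", 1, 2], "less than", "greater than or equal to"]
print(evaluate(program))  # -> less than
```

Whether anyone would want to write programs in that syntax is, as the parent says, another matter.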


> It has become yet another community that's fighting a struggle that everyone else ended years ago, ... like the few Japanese in jungles who refused to surrender.

Nice imagery, but the wrong point.

Except for the syntax, everybody else joined Lisp.

"We were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp." --Guy Steele

Flash back to the mid-1980's (when the mainstream was C, Pascal, BASIC, FORTRAN, COBOL, etc.) and it's Lisp/Scheme (and Smalltalk) that have features like Garbage Collection, interactive development, lexical closures, decent built-in data structures, dynamic typing.

The fact that all of this is commonplace today, both justifies a lot what Lisp did in the first half of its existence and undermines its (technical) competitive advantages now.

> but I suspect it's because most people strongly prefer (a) visually semantically different scope delimiters and (b) function-outside-brackets syntax ie f(a, b) rather than (f a b).

It's not technical. I don't think it ever was. So much of it is around social concerns: a performance stigma dating back to the 1970's, fear of being able to hire people to do the work, fear of what VC's will think, worries that the language will still be available... And then at the end of the day, the problems whatever language will solve are a tiny fraction of the overall problem of doing something relevant and lasting and useful to others.

> As the kids say: stop trying to make Lisp happen, it's not going to happen.

Life is too short and the world is too big to try to confine other people's ideas of how they should think or work.

The point of the market economy and of the scientific process is that people get to try what they think is going to be useful and then let the world decide. The fact that Lisp is still in the conversation at all, when its contemporaries (Autocoder, Fortran) either aren't or are highly specialized, says a lot that we can learn from.


>As the kids say: stop trying to make Lisp happen, it's not going to happen.

Mean Girls came out in 2004; no kid knows that movie.


Oh my! So web-assembly is not 'happening' then ? May it REST in peace.


I think what you're doing with pulumi is the right answer and it's only a matter of time before this becomes the norm. The author's examples could easily be done with plain ol' JS/ES/TS with far more extensibility and customization when the need arises.

I also feel this is where JSX got it right. Instead of creating yet-another-templating-language (looking at you Angular!), they used JavaScript and did a great job of outlining how interpolation works. Any new templating language is always going to be missing some key feature you expect out of a general programming language and your customers will continue to ask for more features.

Take for example Terraform and HCL: they're continually adding more and more [templating features](https://github.com/hashicorp/terraform/blob/master/website/d...) and [functions](https://www.terraform.io/docs/configuration/interpolation.ht...) because there are so many different ways to skin configuration/infrastructure as code. What if TF just expected a "computed" JSON object and left it up to the developer to figure out how to put it together?

I'm gonna keep an eye on Pulumi and hope to be able to use it in a real project soon.


Crazy idea, but couldn't we use JSX for configuration?

    <AutoScalingGroup name='Main cluster'>
      <LaunchConfig imageId='ami-xx'>...</LaunchConfig>
    </AutoScalingGroup>

Paired with TypeScript, we would have the clarity of a declarative language with the power and flexibility of a real language that is also easy to extend and navigate.

As a bonus, most tooling already exists.
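
A rough sketch of what the element-tree style buys you, in plain Python rather than JSX (the `element` helper and tag names are made up for illustration): the config stays declarative data, but loops come for free.

```python
def element(tag, attrs=None, *children):
    """Build a declarative config node, JSX-createElement style."""
    return {"tag": tag, "attrs": attrs or {}, "children": list(children)}

asg = element(
    "AutoScalingGroup", {"name": "Main cluster"},
    element("LaunchConfig", {"imageId": "ami-xx"}),
)

# Because it's ordinary code, repetition is just a comprehension:
workers = element(
    "AutoScalingGroup", {"name": "workers"},
    *[element("LaunchConfig", {"imageId": f"ami-{i}"}) for i in range(3)],
)
```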


You just invented XML!


This is sitting right on the genius/insanity border.


I had the exact same idea! Does something like that exist?


In .NET land, there's Razor, which was designed from the get-go to mesh well with C# syntax, such that you need a minimal amount of control characters:

https://docs.microsoft.com/en-us/aspnet/core/mvc/views/razor...


In ROS we have these XML launch files that are just awful. They have enough features to be a really bad programming language for configuring and launching (often conditionally) numerous robot software nodes.

In ROS2 the launchfile can now just be a Python script. Very much learned all this the hard way and the solution was to just support Python. I think it's brilliant.


AOLServer and our own Tcl based application server also used this idea.

Configuration files for each component were a DSL made of Tcl functions. Each module just sourced the respective file on load.


There are several possible situations:

- the Django-like situation: the configuration is pure code, and it's a mistake. It was not necessary, and it brought plenty of problems. I wish they had gone with a templated TOML file.

- the Ansible-like situation: the configuration is templated static text. But with something as complex as deployment, they ended up adding more and more constructs, until they created a monstrous DSL on top of their implementation language, with zero benefits compared to it and plenty of pitfalls. In that case, they should have made a library, with an API and documentation placing an emphasis on best practices.

- and of course a big spectrum between those

The thing is, we see configuration as one big problem, but it's not. Not every configuration scenario has the same constraints and goals. Maybe you need to accept several sources of data. Maybe you need validation. Maybe you need generation. Maybe you need to be able to change settings live. Maybe you need to enforce immutable settings. Maybe you need to pub sub your settings. Maybe you need to share them in a central place. Maybe they are just for you. Maybe you want them to be distributed. Maybe you need logic. Maybe you want to be protected from logic. Maybe the user can input settings. Maybe you just read conf. Maybe you generate it.

So many possibilities. And that's why there is not a single configuration tool.

What you would need is a configuration framework, dealing with things like merging conf, parsing files, getting conf from the network, expressing constraints, etc.

But if you recreate a DSL for your config, it's probably wrong.


In defence of Django, the way settings.py works has been very stable for the entire lifetime of Django.

It may have its problems (I don't have many issues with it) but it doesn't seem to have this problem of attracting ever more layers of abstraction on top of it. It works.


Actually, I think settings.py is not a bad idea, but it's half-baked.

There should be a schema checking the settings file. There should be a better way to extend settings, and to make different settings according to context, such as prod, staging, or dev.

There should be a linter catching stupid mistakes like missing a comma in a tuple, resulting in string concatenation.

There should be variables giving you basic stuff like current dir, log dir, var dir, etc. We all make them anyway.

And there should be a better way to debug the settings import problem.

But all in all, it's quick and easy to edit, and very powerful.
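
As a sketch of the kind of schema check being wished for (the checker and schema here are made up; Django doesn't ship this): a runtime type check catches a bare string where a tuple is expected, while the missing-comma pitfall is exactly what a linter would have to flag, since adjacent string literals concatenate silently.

```python
def check(settings, schema):
    """Report settings whose value doesn't match the expected type."""
    errors = []
    for name, expected in schema.items():
        if not isinstance(settings.get(name), expected):
            errors.append(f"{name}: expected {expected.__name__}")
    return errors

# Illustrative schema; the setting names mirror Django's conventions.
SCHEMA = {"ALLOWED_HOSTS": tuple, "DEBUG": bool}

good = {"ALLOWED_HOSTS": ("a.example", "b.example"), "DEBUG": False}
bad = {"ALLOWED_HOSTS": "a.example", "DEBUG": False}  # bare string

print(check(good, SCHEMA))  # -> []
print(check(bad, SCHEMA))   # -> ['ALLOWED_HOSTS: expected tuple']

# The missing-comma bug the type check can't see -- a linter's job:
hosts = ("a.example" "b.example",)  # one concatenated string, not two!
assert len(hosts) == 1
```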


There is already a mechanism to validate the settings.py file inside django.

The different context stuff can be handled by using env vars, and a nice python wrapper, like python-decouple.


> There is already a mechanism to validate the settings.py file inside django.

It's not exposed, and it's very limited.

> The different context stuff can be handled by using env vars, and a nice python wrapper, like python-decouple.

It's just one of the ways to do it. Go to a new project, and they use a different way. The main benefit of Django is that a Django project is well integrated, and you find similar conventions and structure from project to project, allowing you to reuse the skills you learned and build an ecosystem of pluggable apps.


Just so we're on the same page, this is the validation I was referring to -

https://docs.djangoproject.com/en/2.1/topics/checks/

Standardization is always an issue, I guess. Env vars seem to be the norm in the community in my experience, for whatever that's worth.


Ah, the stuff used for the password?

I would be more of a fan of something like marshmallow, checking the whole thing.


> it brought plenty of problems

Has anybody here personally suffered the problems that the Turing-complete Django configuration creates? (I mean, not the ones caused by a lack of completeness checks or good library support, but the ones caused by too much power.)

If so, what do those problems look like?


Now that you say it, it's true I didn't have problems with too much power.

I never had an untrusted party editing my config, nor did I use data from any.

Also, you can make the same mistakes in the settings file as in any code file, but they're no more or less serious there.

In fact, all the problems I had could have been solved by better integration: solving the import problem, making composition easy, adding checks, allowing data to be loaded from several sources and merged, and presenting it all through a unified interface.

If I'm being honest, the problem with settings.py may not have been that it's Python, but that it's a flat file with no strong conventions, tooling, or best practices.

I could raise the issue that you can't read the config from another language, but I never had to, and good tooling would allow a synced export or an API to consume the settings.

Same for writing, or live settings.


After years of working with cfengine then ansible I finally went to a bespoke BSD ports work alike with optional client/server and json configuration components. Never looked back.


What does it look like ?


RCS stored directory based modules with tasks in subdirectories. Make or shell script style module execution as part of each task dir + variable files containing settings for the install task. Json configuration files that define all necessary module params (ex:log, task selection, stop on error, initialization, build command per task, etc...) remote scheduling of module/task execution via per agent sysv ipc command queue serviced by a JSON-RPC microsvc which allows both serialized and non blocking task scheduling by queue priority.


I owned the majority of the configuration system and ecosystem for Borg, Google's internal cluster management and application platform.

Unfortunately, what's described here is good on many levels, but not excellent at any.

If you are OK with describing the complexity of your infrastructure in a general purpose programming language, then a well-abstracted API built on the cloud providers' original APIs is more familiar to devs, and it will be more reliable, performant, and flexible.

If you want a config experience, something like kustomize is leaner and more compatible with the text config model.

I also cannot see how this interoperates with other tools, which will seriously limit its appeal to people using other tools.


The problem with code as configuration is that the config file is nondeterministic and it takes longer to extract information from the file.

This has long been a problem in the Python/pip community, as it's basically impossible for the build tools to determine the dependencies of a package without fully downloading and running the setup.py file.

Static config files are static for a reason!
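
A small illustration of why that is (hypothetical, not a real setup.py): when the dependency list is computed at run time, a static parser sees only a function call, so tooling has no choice but to execute the file.

```python
import sys

def compute_dependencies():
    """Dependency list that only exists after execution."""
    deps = ["requests"]                    # illustrative package names
    if sys.version_info < (3, 8):
        deps.append("importlib-metadata")  # conditional, decided at runtime
    return deps

# A static parser of this file sees a function, not a list of names --
# which is why pip historically had to download and run setup.py.
print(compute_dependencies())
```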


Unless you import rand(), your code should be deterministic. You're right about needing to run the thing to get the data (that's the point), but there is a middle ground between pure literals and fully side-effecting code. For example, you could impose pure functions (no side effects).


That's exactly what Dhall is doing.


That's what Haskell already does. Dhall is optimizing on different dimensions (making sure the script's execution ends, making scripts statically verifiable, making it convenient to merge files, making it convenient to centralize your configuration).


As a happy pulumi user, I have to say I am very impressed with the experience. An order of magnitude improvement on maintainability over our old terraform code base. Highly recommended.


This is my experience and it's clearly biased from maybe one bad example but ... Scons is an example of code over configuration and from what I could tell I never met someone that truly understood it. Because it was code over configuration, every programmer added their own interpretation of what was supposed to happen, no programmer truly understood what was really going on, and it turned into one giant mess of trying to understand different programmers' hacks and code to get the build to work. I'm sure some Scons expert will tell me how I'm full of crap but I'm just saying, that's my experience.

So, what's my point? My point is configuration languages help in that they push "the one true way" and help to enforce it. Sure there are times you end up having to work around the one true way but given very powerful tools of a full language for configuration leads to chaos or at least that's my experience. Instead of being able to glance at the configuration and understand what's happening because it follows the one true way you instead end up with configuration language per programmer since every programmer will code up stuff a different way.


For what it's worth--I've been using Pulumi on a couple of different projects and, today, I couldn't imagine starting a cloud-based project on anything else. The Pulumi team has spent more time than almost anybody I know on understanding how to attack these problems; I guess I have a bit of an understanding of just how much work that is, as I've tried to do the same thing and their solution is better.

I appreciate that their revenue model doesn't require making the open-source version frustrating or stupid and I appreciate that they're incredibly responsive. And some of the stuff you'll see around cloud functions/Lambdas and the deployment thereof will fucking blow your mind.

It's good. You should strongly consider it.


I have been using ksonnet but that is now officially dead. Working with jsonnet seemed unnecessarily painful when coming from coding typescript. This information is quite timely and welcome, I'll look further at the ts example.


We have ksonnet expats on the team (we're all in cloud city -- Seattle), and I've been keeping an eye on that project myself, since I think it got a lot of things right and frankly many of the ideas for Pulumi were inspired by early chats with the Heptio team. But, as you say, why create a new language when an existing one will do -- that was our original stance and it's working great in practice.

Joe Beda will be doing a deep dive on Pulumi on the TGIK videocast tomorrow, so it's a timely opportunity to check it out: https://twitter.com/jbeda/status/1092963296565587969


OP here. I actually wrote a post about Pulumi in this very space a while back

https://leebriggs.co.uk/blog/2018/09/20/using-pulumi-for-k8s...

I do think this is more like what we should be doing, but was dismayed to see Pulumi's free tier get sunsetted


Our free tier is still there and here to stay. What did we do to make you think it's been sunsetted? :-(


Oh! I don’t know where I got that impression from then! perhaps I just thought that we couldn’t use the free tier because of the number of licenses we’d need, but you’re right, it’s still there!


Build files (e.g. makefiles and their various descendants like SCons, rake, etc.) seem to be in the same general boat, except very early on mixing in "real languages" (or at least shell scripting) was obviously allowed, so they've always leaned far more towards the "yes, it is a general purpose language" end of the spectrum.


> My belief is that we've been slowly building up to using general purpose languages, one small step at a time, throughout the infrastructure as code, DevOps, and SRE journeys this past 10 years. INI files, XML, JSON, and YAML aren't sufficiently expressive -- lacking for loops, conditionals, variable references, and any sort of abstraction -- so, of course, we add templates to it. But as the author (IMHO rightfully) points out, we just end up with a funky, poor approximation of a language.

This is why I prefer to use a JS file for configuration instead of a native JSON or YAML file if those options are available.


Also see `webpack` as a successful example of code-as-configuration in the wild.


Not sure it counts as successful when people call it hell to maintain and newer, simpler alternatives like Parcel are gaining popularity.


I still don't know how to get it to do exactly what I want. There is far too much magic involved, and experience has long demonstrated that magic is bad (Webpack confirms that belief).

That being said, the concept of defining a function in, essentially, a config file seems like a step in the right direction. I don't think I'd trust that functionality outside of builds or infra-as-code, though.


What's magic about webpack? The online documentation provides quite a lot of insight into how it all fits together.

It probably only seems like magic because you didn't build a fundamental understanding of how it works before using it. I use some massive webpack configurations and I understand them all quite thoroughly thanks to well-written, modularized configuration files.


For 10 years of Java/Android/Scala coding there was no need to understand how compilers combine everything into one JAR.


Javascript is a scripting language without native module support. That isn't Webpack's fault.

Webpack also handles much, much more than just Javascript. It handles CSS, HTML, images, files, pretty much any kind of asset. Java/Scala doesn't have anything like that. Asset management is completely different due to the nature of how assets are transferred to the client.

And Android? Give me a break. The moment you stray from the strict layout of an Android app you run into a wall and have to learn how Gradle operates. This strict layout is good for some but others hate when an environment forces particular constraints upon them.

Webpack is completely configurable at every stage, works with plugins (which compilers don't do) and again, isn't magic. Not knowing how something works doesn't make it magic. That's not what magic means with respect to code.

Besides... Maybe if you just like getting by, you can program in C/Java/etc without learning about compilers. Web dev is fucked and transpiler knowledge is basically required, but sure you can get by in other domains without it. But if you want to be a good programmer, an expert at what you do, someone who lives and breathes and understands computer science, someone who will excel in his career and not remain a code monkey forever... You have to learn about how your compilers work just like you should know how the silicon in your computer is doing its own "magic".


It was very successful. Complicated projects require complicated build config. Parcel does fine for simple projects, but lacks the raw power & configurability of webpack.

Webpack now does simple config as well with the 'mode: "production"' and 'mode: "development"' presets.
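A minimal sketch of what those presets look like in practice (field names follow webpack's documented configuration schema; the entry/output paths are made up):

```typescript
// webpack.config.ts -- minimal config leaning on the mode presets.
const isProd = process.env.NODE_ENV === "production";

export default {
  // "production" enables minification and other optimizations;
  // "development" favors fast incremental rebuilds and readable output.
  mode: isProd ? "production" : "development",
  entry: "./src/index.js",
  output: { filename: isProd ? "bundle.[contenthash].js" : "bundle.js" },
};
```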


Hi, is Pulumi a generalized AWS CDK (https://github.com/awslabs/aws-cdk/blob/master/examples/cdk-...)? Looks pretty similar :D


Having dealt with puppet, cloudformation, ansible and other solutions that have gone in and out of fashion and also dealing regularly with Kotlin, Java, Javascript, and recently typescript, my view is that configuration files are essentially DSLs.

DSLs ought to be type safe and type checked, since getting things wrong means all kinds of trouble. E.g. with CloudFormation I've wasted countless hours googling for all sorts of arcane weirdness that Amazon people managed to come up with in terms of property names and their values. Getting that wrong means having to dig through tons of obscure errors and output. Debugging broken CloudFormation templates is a great argument against how that particular system was designed. It basically requires that you know everything ever listed in the vastness of its documentation hell and somehow produce thousands of lines of JSON/YAML without making a single mistake, which is about as likely as it sounds. Don't get me started on Puppet. Very pleased to not have that in my life anymore.

On a positive note, kotlin recently became a supported language for defining gradle build files in. Awesome stuff. Those used to be written in Groovy. The main difference: kotlin is statically compiled and tools like intellij can now tell you when your build file is obviously wrong and autocomplete both the standard stuff as well as any custom things you hooked up. Makes the whole thing much easier to customise and it just removes a whole lot of uncertainty around the "why doesn't this work" kind of stuff that I regularly experience with groovy based gradle files.

Not that I'm arguing for using Kotlin in place of JSON/YAML. But TypeScript seems like a sane choice. JSON is actually valid JavaScript, which in turn is valid TypeScript. Add some interfaces and boom, you suddenly have type safety. Now using a number instead of a boolean or string is obviously wrong. Also, TypeScript can do multi-line strings, comments, etc., and it supports embedding expressions in strings. No need to reinvent all of that and template JSON when you could just be writing TypeScript.
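To sketch that "add some interfaces and boom" step (a hypothetical server config, not any real schema):

```typescript
// JSON-shaped data with an interface on top: the compiler now rejects
// wrong types before anything is deployed.
interface ServerConfig {
  host: string;
  port: number;
  tls: boolean;
}

export const server: ServerConfig = {
  host: "api.example.com",
  port: 443,
  tls: true,
  // port: "443"  <-- would be a compile-time error: string is not a number
};
```

The object literal is still just JSON with the quotes relaxed; only the annotation is new.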

I recently moved a yaml based localization file to typescript. Only took a few minutes. This resulted in zero extra verbosity (all the types are inferred) but I gained type safety. Any missing language strings are now errors that vs code will tell me about and I can now autocomplete language strings all over the code base which saves me from having to look them up and copy paste them around. So no pain, plenty of gain.
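A sketch of what that YAML-to-TypeScript move might look like (the keys and helper here are hypothetical): the object literal itself becomes the type, so every lookup is checked and autocompleted with zero annotations.

```typescript
// strings.ts -- localization strings as a plain object; types are inferred.
export const strings = {
  greeting: "Hello",
  farewell: "Goodbye",
} as const;

// Any key not present in `strings` is a compile-time error at the call site.
type StringKey = keyof typeof strings;

export function t(key: StringKey): string {
  return strings[key];
}
```

`t("greting")` would fail to compile, which is exactly the "missing language strings are now errors" behavior described above.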

And yes, people are ahead of me and there are actually several projects out there offering typescript support for cloudformation as well.


To go with your general line of thought, see how many JS-based projects are increasingly moving towards a JS file with a default export as a config file.


This looks absolutely great! I’ll give this a thorough look over for our coming API development/deployment.

Thanks for plugging!


Are you going to add C# support?


Definitely. I was a part of C# in the early days, so little else would make me happier than awesome first-class .NET support. This'll be great for Azure folks -- who knows, PowerShell too?

We are actively working on https://github.com/pulumi/pulumi/issues/2430, which will make it easier for our small team to manage multiple languages. Once that lands, I would expect this to be high priority.

Some of our amazing community members have been prototyping this, and it's looking pretty promising: https://twitter.com/MikhailShilkov/status/109278757393889689....

Joe


> Definitely. I was a part of C# in the early days, so little else would make me happier than awesome class .NET support. This'll be great for Azure folks -- who knows, PowerShell too?

Powershell would be great, it has nice support for building DSLs.



