I don't know if any of the links below will count as crying, but here are some, from British media reporting on Russia:
- BBC, 2018: Russia: Google removes Putin critic's ads from YouTube https://www.bbc.co.uk/news/technology-45471519
- BBC, 2021: How Russia tries to censor Western social media https://www.bbc.co.uk/news/blogs-trending-59687496
- BBC, 2021: Russia slows down Twitter over 'banned content' https://www.bbc.co.uk/news/world-europe-56344304
- BBC, 2021: Russia threatens YouTube ban for deleting RT channels https://www.bbc.co.uk/news/technology-58737433
- BBC, 2021: Russia threatens to slow down Google over banned content https://www.bbc.co.uk/news/technology-57241779
- Reuters, 2022: Russia blocks access to BBC and Voice of America websites https://www.reuters.com/business/media-telecom/russia-restricts-access-bbc-russian-service-radio-liberty-ria-2022-03-04/
- The Guardian, 2022: Russia blocks access to Facebook and Twitter https://www.theguardian.com/world/2022/mar/04/russia-completely-blocks-access-to-facebook-and-twitter
- BBC, 2022: Russia restricts social media access https://www.bbc.co.uk/news/technology-60533083
- BBC, 2022: Russia confirms Meta's designation as extremist https://www.bbc.co.uk/news/technology-63218095
- BBC, 2024: Data shows YouTube 'practically blocked' in Russia https://monitoring.bbc.co.uk/product/b0003111
- BBC, 2024: Russia's 2024 digital crackdown reshapes social media landscape https://monitoring.bbc.co.uk/product/b0003arza
"The EU condemns the totally unfounded decision by the Russian authorities to block access to over eighty European media in Russia.
This decision further restricts access to free and independent information and expands the already severe media censorship in Russia. The banned European media work according to journalistic principles and standards. They give factual information, also to Russian audiences, including on Russia’s illegal war of aggression against Ukraine.
In contrast, the Russian disinformation and propaganda outlets, against which the EU has introduced restrictive measures, do not represent a free and independent media. Their broadcasting activities in the EU have been suspended because these outlets are under the control of the Russian authorities and they are instrumental in supporting the war of aggression against Ukraine.
Respect for the freedom of expression and media is a core value for the EU. It will continue supporting availability of factual information also to audiences in Russia."[0]
FWIW the native and WASM versions of my home computer emulators are within about 5% of each other (on an ARM Mac), i.e. more or less 'measurement noise'.
Maybe the emulator code is particularly WASM-friendly ... it's mostly bit twiddling on 64-bit integers with very little regular integer math (except incrementing counters) and relatively few memory loads/stores.
Yet still, the 'raw' pixel data of old games rendered on modern displays without any filtering doesn't look anything like it did on CRT monitors (and even among CRT monitors there was a huge range between "game console connected to a dirt-cheap TV via coax cable" and "desktop publishing workstation connected to a professional monitor via VGA cable").
All CRT shaders are just compromises in the 'correctness' vs 'aesthetics' vs 'performance' triangle (and everybody has a different sweet spot in that triangle; that's why there are so many CRT shaders to choose from).
This CRT shader actually has a flicker slider. But 'brain melting flicker' sounds more like you were gaming with a 50Hz PAL console (or home computer) on a professional computer monitor which was intended for higher frequencies (like 72Hz). Regular TVs normally had plenty of 'afterglow' to reduce flicker.
Hmm, but game objects are exactly the popular use case where traditional inheritance breaks down first and composition makes much more sense? That's why the whole Entity-Component idea came up decades ago (like in Unity, long before the more modern column-database-like ECS designs).
Entity systems still rely on inheritance, as you typically have an Entity base class that all entities derive from. That's just a single level of inheritance. Unity does this through MonoBehaviour. Unreal is more inheritance-heavy, but developers typically subclass something like `Actor`, adding one level of inheritance beyond the engine's own classes. A lot of engines will have multiple superclasses behind an Entity class, but that's an implementation detail of the engine itself; to the game developer it's treated as a single base class that is typically subclassed once for the entity itself.
Even in ECSs you will often find inheritance. Some implementations have you inherit from a Component struct; others have the systems inherit from a System class.
I'm sure it's still used today in some engines and by some developers, but the overwhelming opinion is that doing something like Entity -> Actor -> Monster -> Orc -> IceOrc is a bad idea. Instead it would be something like the sketch below.
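(A hand-wavy C-style illustration - the component names are made up, just to show the shape of composition over deep inheritance:)

    /* Instead of Entity -> Actor -> Monster -> Orc -> IceOrc,
       one flat Entity composed of optional components: */
    typedef struct { float x, y; } Position;
    typedef struct { int hp, attack; } Combat;        /* what "Monster" would have added */
    typedef struct { float slow_factor; } FrostAura;  /* what "IceOrc" would have added */

    typedef struct {
        Position   pos;
        Combat    *combat;      /* NULL if this entity can't fight */
        FrostAura *frost_aura;  /* NULL unless it's the "ice" variant */
    } Entity;

The specialisations become data attached to the entity rather than levels in a class hierarchy.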
And yeah, they favour composition re: Components; it's just that the components tend to inherit from a Component class. But I would still call it composition!
It is, but you get the benefit of shared behaviour and state.
Like an Entity might have a Position, a reference to the World, the Screen, methods for interacting with other entities, etc. You don't get that from simply implementing an interface, although it's not difficult to pass those into the object. A common example I've seen is having every class extend an EventEmitter superclass. You could implement that as part of the interface but that becomes a ton of duplication.
I think of it like this: if you model your domain as something like `A : B : C : D {}` you get all the problems of inheritance, whereas simply doing `D { A; B; C; }` gives you the same benefits without the problems. Doing `A : X {}, B : X {}, C : X {}` sidesteps most of the problems with inheritance but still gives you some of the benefits.
...I'm pretty sure the same would be true in any modern version of NeXTStep had it survived as its own 'brand' (apart from slightly different requirements caused by the hardware the OS needs to run on, of course - e.g. running on a handful of different Apple devices versus having to work on 'everything').
Darwin ditched the old driver stack for IOKit because they thought it was icky to have ObjC in the kernel. That's pretty much entirely due to leadership changes, not technical reasons.
Agreed, the same way Longhorn, Midori and Singularity failed to win the hearts of the Windows team, while Android and ChromeOS obliterated their mobile and US school markets.
Turns out using a managed userspace is viable, if management is on board to support the development all the way through.
IMHO the downsides of tagged unions (e.g. what Rust confusingly calls "enums") are big enough that they should only be used rarely, if at all, in a systems programming language, since they're shoehorning a dynamic type system concept back into an otherwise statically typed language.
A tagged union always needs at least as much memory as its biggest alternative, but even worse, tagged unions nudge the programmer towards 'any-types', which basically moves type checking from compile time to run time; but then why use a statically typed language at all?
And even if they are useful in some rare situations, are the advantages big enough to justify wasting 'syntax surface' instead of rolling your own tagged unions when needed?
Tagged unions (not enums, sorry) are not a dynamic type system concept. Actually, I would not be able to name a single dynamically typed language that has them.
As for the memory allocation, I can't see why any object should have the size of the largest alternative. When I do the manual equivalent of a tagged union in C (i.e. a struct with a tag followed by a union) I malloc only the required size, and a function receiving a pointer to this object had better not assume any size before looking at the tag. Oh, you mean when the object is automatically allocated on the stack, or stored in an array? Yes, then, sure. But that's going to be small change if it's on the stack, and for the array, well, there is no way around it; if it doesn't suit your design then keep only the tags in the array?
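A minimal sketch of that idiom (the type and variant names are made up for illustration):

    #include <stdlib.h>
    #include <stddef.h>

    typedef enum { TAG_BYTE, TAG_PAIR } Tag;

    typedef struct {
        Tag tag;
        union {
            unsigned char byte;            /* small alternative */
            struct { double a, b; } pair;  /* large alternative */
        } as;
    } Value;

    /* Allocate only what the chosen alternative actually needs; anyone
       receiving the pointer must check 'tag' before touching 'as'. */
    Value *make_byte(unsigned char b) {
        Value *v = malloc(offsetof(Value, as) + sizeof(v->as.byte));
        if (!v) return NULL;
        v->tag = TAG_BYTE;
        v->as.byte = b;
        return v;
    }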
Tagged unions are a thing, whether the language helps or not. When I program in a language that has them, they're probably a sizeable fraction of all the types I define. I believe they are fundamental to programming, and I'd prefer the language to help with syntax and some basic sanity checks; like a dynamic sizeof that reads the tag so it's easier to malloc the right amount, or a syntax that makes it impossible to access the wrong field (i.e. any lightweight pattern matching will do).
In other words, I couldn't really figure out the downside you had in mind :)
> Actually, I would not be able to name a single dynamically typed language that has them.
That's because every type in a dynamically typed language is a tagged union ;) For instance, in JavaScript you need to inspect a variable with 'typeof' to find out if it is a string, a boolean, a number or something else.
In a dynamically typed language, the runtime system needs to carry around information about what type an item actually is, and this is the same thing as the type tag in a tagged union - and Rust's match is the same sort of runtime type inspection as typeof in JS, just with slightly different syntax sugar.
> As for the memory allocation, I can't see why any object should have the size of the largest alternative.
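Take a type like this, for example (a rough C analogue of the two-variant sum type being discussed; in Rust the string alternative would be a String, and the names match the next paragraph):

    typedef struct {
        enum { ABYTE, ASTRING } tag;
        union {
            unsigned char a_byte;    /* 1-byte payload */
            char         *a_string;  /* pointer-sized payload */
        } u;
    } Bla;
    /* sizeof(Bla) == 16 on a typical 64-bit target: a 4-byte tag,
       padding, and an 8-byte union sized for the largest alternative. */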
...then every Bla object is always at least 16 bytes even when the active item is 'AByte' (assuming an empty String also fits into 16 bytes). Plain unions in C have the same problem of course, but those are rarely used (the one thing where unions are really useful in C (not C++!) is to have different views on the same memory).
> When I program in a language that has them then it's probably a sizeable fraction of all the types I define
...IMHO 'almost always sum types' is a serious design smell; it might be ok in 'everything is a reference' languages like TypeScript, but that's because you pay for the runtime overhead anyway, whether sum types are used or not.
I don't think we speak the same language.
I was referring to the use case (in C) that's been described by sph.
Where you indeed malloc only the relevant size, and you have to manually and carefully check the tag before casting the payload to the proper type. This is what I am tired of doing over and over and over again, and would like a systems programming language to help with.
...what else is a select on a tagged union than 'runtime casting', though? You have a single 'sum type' and you don't know which concrete type it actually holds at runtime until you look at the tag and 'cast' to the concrete type associated with that tag. The fact that some languages have syntax sugar for the selection doesn't make the runtime overhead magically disappear.
Not sure why you call it runtime overhead. That’s core logic, nothing fancy, to have a pointer to a number of possible types. That’s what `void *` is, and sometimes you want a little logic to restrict the space of possibilities to a few different choices.
Not having syntactic sugar for this ultra-common use case doesn’t make it disappear. It just makes it more tedious.
There are many implementations and names. What I refer to as runtime casting / any-types, which is unnecessary for low-level programming, is the kind that uses type information and reflection at runtime to be 100% sure you are casting to the correct type. Like Go's pattern (syntax might be a bit off):
    var s *string
    var unknown interface{}
    // panics at runtime if unknown is not a string pointer
    s = unknown.(*string)
This is overkill for low-level programming and has much higher overhead (e.g. having to store type info in the binary, fat pointers, etc.) than tagged unions, which are the bread and butter of computing.
How would you integrate C3 with other programming languages (not just C), or even talk to operating systems if you don't implement a common ABI?
And the various system ABIs supported by C compilers are the de facto standards for that (contrary to popular belief there is no such thing as a "C ABI" - those ABIs are commonly defined by OS and CPU vendors, and C compilers need to implement them just like any other compiler toolchain if they want to talk to operating system interfaces or call into libraries compiled with different compilers from different languages).
> How would you integrate C3 with other programming languages (not just C)
That's the job of an FFI. The internal ABI of most languages isn't anything like their FFI, e.g. any garbage-collected language can't use the OS "C" ABI.
Most operating systems don't use the same ABI for kernel syscalls and userland libraries either. (Darwin is an exception where you do have to link a library instead of making syscalls yourself.)
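To make that concrete, here's a small Linux-specific C sketch (x86-64 assumed, purely illustrative): the libc write() wrapper goes through the userland library ABI, while syscall() invokes the kernel's syscall ABI directly.

    #define _GNU_SOURCE
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        const char *msg = "hello\n";

        /* Userland library ABI: the write() wrapper exported by libc. */
        write(STDOUT_FILENO, msg, strlen(msg));

        /* Kernel syscall ABI: the same operation, invoked by number
           (SYS_write) through the generic syscall() trampoline. */
        syscall(SYS_write, STDOUT_FILENO, msg, strlen(msg));

        return 0;
    }

On Darwin the second call is exactly what you're not supposed to do; the supported interface is the library, not raw syscall numbers.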
> contrary to popular belief there is no such thing as a "C ABI"
It is a "C ABI" if it has e.g. null-terminated strings and varargs with no way to do bounds checking.
Tbh on such a bare-bones system I would use my own trivial arena bump allocator and only do a single malloc at startup and a single free before shutdown (if at all - because why even use the C stdlib on embedded systems instead of talking directly to the OS or hardware?).
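Something like this minimal sketch (names made up; one upfront allocation, a pointer bump per 'allocation', and a single free at the end if ever):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef struct {
        uint8_t *base;   /* the single upfront allocation (or a static buffer) */
        size_t   cap;
        size_t   used;
    } Arena;

    static int arena_init(Arena *a, size_t cap) {
        a->base = malloc(cap);   /* the one and only malloc */
        a->cap  = cap;
        a->used = 0;
        return a->base != NULL;
    }

    static void *arena_alloc(Arena *a, size_t size) {
        size = (size + 15u) & ~(size_t)15u;   /* keep 16-byte alignment */
        if (size > a->cap - a->used) return NULL;
        void *p = a->base + a->used;
        a->used += size;
        return p;
    }

    static void arena_release(Arena *a) {
        free(a->base);           /* the one and only free */
        a->base = NULL;
    }

On a real embedded target you'd likely point 'base' at a statically reserved buffer instead of calling malloc at all.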
That "crying" must have been awfully quiet; I didn't hear anything, at least.