Hacker News | qual's comments

(Not the person you were replying to)

I'm curious what/how you define "woke culture", because the only definitions of "woke" I've ever heard are basically "thing I don't like" or "the left". Neither of those definitions have helped me understand what you are so vehemently against.

Can you help me understand what woke culture is to you?

This is a genuine question.


(Not the person you were asking, either)

In one sentence, wokeness is aggressively pursuing racial (and other identity-based) conflict above everything else, not caring what else you trample on (including even classic civil rights principles). "Microaggressions" is probably the most distinctively woke concept.

Censoring gun emojis is not woke in that respect, since it doesn't pursue any particular identity conflict (except, maybe, crusading on behalf of anyone with gun-related PTSD?). But the method (forcing people to sanitize their communication) and some aspects of the motivation (hand-wringing about some bits of language, acting like some people have extremely delicate sensibilities, using that to justify censorship) are similar to some prominent manifestations of wokeness and the corporate policies they push. So I can understand why the connection was drawn, though the term isn't quite appropriate.


Thanks for taking the time to write this out, I think it helps me understand a bit better. Every definition seems to be sort of different and personalized but I think it's beginning to coalesce into something in my mind, rather than just leaving me confused.

Usually when I try to ask this question, I just have angry people being angry with me, and I end up more confused. So it's nice to have some legitimate explanations come my way.


I'm a big fan of the Origin Story podcast, where they dig into words/concepts and look at where they started and how the use has evolved over time. They also cover some things like people, or conspiracy theories.

Their Woke episode might be worth a listen.

https://open.spotify.com/episode/1ncWO9Aj1kMtQxBsHf0aYg

https://pod.link/1624704966/episode/0b0e2c363a94280fe0ae576b...


Sure, glad to help. I can give more detail including etymology.

"Woke" originally meant that a person has "awoken" to the true nature of society. They see the thing that others do not see! Unfortunately, it seems that that's also the only thing they see, or the only thing they care about. And that thing is racial / identity-group conflict. As I say, they cheerfully sacrifice principles like freedom of speech, meritocracy, proportionality, hard-fought civil rights ideals, etc., when it's inconvenient for the fight they want to pick in the moment. I think that, to a first approximation, when anyone says "Is this going too far?", they get accused of sympathizing with the oppressors and shamed or cast out, so the group goes farther, and we are seeing the results of iterating this process for decades. A peak moment was tearing down a statue of Abraham Lincoln, apparently in the name of anticolonialism.

That's the woke people themselves, a small, aggressive, highly vocal group with outsized influence. (There's also a larger group of relatively normal people with a surface-level understanding of it who go along with most of it; I think if they understood and believed all the details, most of them would recoil.) (Incidentally, most people instinctively understand that aggressively attacking others is bad, but via concepts like "microaggressions" and "privilege", the woke frequently find reasons to think that someone else attacked first.)

Then there's the legal and corporate environment. Civil rights legislation brought laws against discrimination and creating a hostile environment for identity groups in companies. In the absence of mind-reading, a claim of discrimination is difficult to prove or disprove, and a "hostile environment" could be interpreted in many ways. I think that the "game of telephone" chain from the text of the law, to the interpretation of the law (and evidentiary standards) in courts, to lawyers' understanding, to what company lawyers tell management, to what sticks in management's brain and gets implemented, has yielded essentially "If you offend the woke identity groups, then that puts you at significant legal risk, so best appease them. Also it's good to pay them lip service, host Pride events, etc., because that reduces legal risk." Additionally, some portion of management themselves are woke, and some additional portion earnestly believe in civil rights as a cause and tend to take the woke at their word when they claim their policies are the right and proper next steps. And finally, on a personal level, expressing dissent against a woke-aligned policy is, in many cases, perceived to put one's own career at risk (in some cases the lawyers may say it's a liability to keep a person like that; and, having seen cases of this, many people err on the side of caution, creating a "chilling effect").

So you get a lot of corporate policies meant to appease the woke. Some may be the exact policies the woke asked for. Others may be cynical appeasement, and by design they will be difficult to distinguish from the first type. Some policies are invented by well-intentioned people believing in civil rights principles. Some are the outcome of some kind of compromise.

It's not necessarily clear which of these should be called "woke policies", both in theory and in practice. It doesn't help that, whereas you could identify Democrat or Republican policies by looking at official websites, and probably get reasonable consensus on what are "conservative policies" or "progressive policies" based on prominent self-identified conservative figures and institutions, the woke do not call themselves "woke" (with occasional exceptions like "Woke Kindergarten") and object to anyone else calling them that; it's understood to be a pejorative. The woke will tend to call themselves advocates of civil rights or of specific groups' rights, and try to blur any distinctions between the classic civil rights movement which enjoys majority support (things like non-discrimination and gay marriage) and what they're trying to do (things like discriminating in favor of their identity groups and policing pronouns). This serves to make the case that anyone who disagrees with them is a 1950s Jim Crow racist, and anyone who isn't a 1950s Jim Crow racist must agree with them, which is useful for coercing acquiescence and support. It also makes it harder for the rest of us to notice the patterns and call out the damage that the woke have done and are likely to do. But I think we're getting there.


I think micro-aggressions are very real, but maybe the word is a little off. They're not aggressive necessarily, but they are micro.

What I mean is that definitely phrases and questions change meaning, a tiny bit, based off of historical understanding of race and gender. On the surface it seems like a wild proposition, but it's really true.

- wow you're so articulate!
- your hair looks so clean!
- don't you like this kind of music?
- you're a better driver than I expected!
- you're so nurturing!

To you, or me, innocuous. But people say these things for a reason. I've never heard a white man be told he's articulate, or that his hair looks clean. Do you know what I mean?

These things are racially charged and "othering", regardless of whether the perpetrator knows it or intends it. I'd say the vast majority of ANY prejudice is unconscious, meaning people don't know they're doing it.


Should the App Store allow pornography? What about apps by terrorist organizations? Where does censorship begin to make sense to you?


> Can you help me understand what woke culture is to you?

Within this context it’s replacing guns with water pistols, systematically and comprehensively, in the name of fighting gun violence.

On the same token, it’s replacing a water pistol emoji with a gun—in the aftermath of an armed assassination attempt on a Presidential candidate (and former President)—on a platform that regularly censors, in the name of free speech.

Performative nonsense designed to appeal to emotions instead of doing something about the implied problem. (Guns and censorship, respectively.)


>Performative nonsense designed to appeal to emotions instead of doing something about the implied problem. (Guns and censorship, respectively.)

Thanks! This helps me understand it a bit more. Sort of a synonym for "virtue signalling" it seems?


> Sort of a synonym for "virtue signalling" it seems?

I genuinely have no idea what that term means anymore.


"Woke" is best described as an umbrella term for overrighteous performative moralizing leftist authoritarian fundamentalism. It's effectively used opposite of "fascism" which has become an umbrella term for "things I don't like" or "the right".


It's definitely an umbrella term and we'd be here for a while discussing all facets of it, but in this case in particular it's the movement to censor speech under the guise of anti-gun rhetoric (like an emoji will cause gun violence). That is no different than Christians trying to ban violent video games.

Note that I am not the one who brought up the term to the conversation. I did want to avoid the labels as there's always someone who comes up and ask you to define the label instead of talking about the issue itself. In this case, censorship.


>Note that I am not the one who brought up the term to the conversation.

Of course, but since you said you were vehemently against it, I thought you'd be the better person to give me some perspective and help me learn.

>I did want to avoid the labels as there's always someone who comes up and ask you to define the label instead of talking about the issue itself. In this case, censorship.

I would have found it much clearer if your comment said "I am against censorship at its core, vehemently", and as an added bonus you wouldn't be annoyed by me asking about it.

But, to be clear, the reason I asked about the label instead of the issue is because I don't understand what issue(s) woke culture represents to you. So trying to talk about those issues would be difficult.

My impression so far is that woke culture is more than just censorship. I'm very anti-censorship, but I've also been called "woke" in passing as an insult, so unfortunately I'm still left a bit confused. Thanks anyways, though!


Umbrella Term is an optimistic description for something so nebulous and used so frivolously. It is more commonly used as "anything I don't like is woke" and "any opinion that doesn't match mine is censoring me."


Censorship has a very specific definition. And changing the pistol emoji to prevent certain ideas or politics from being communicated was censorship.

As for woke - I don't understand why people ask for a definition. A quick search brings up many useful definitions: https://www.urbandictionary.com/define.php?term=Woke


>I don't understand why people ask for a definition. A quick search brings up many useful definitions

Note how the definition there doesn't match any of the three definitions people gave me here, and none of the three here seem to match each other. Also, as your link says, the definition seems to have changed drastically in the past few years, so I can't even be sure that the link has the most up-to-date definition.

So, I was asking someone who seemed to be very passionate about being "against woke culture", to hear it directly from them. This is a conversational site, I figured it'd be fine, but seeing that my question is now downvoted I guess I need to better learn what conversations and questions are appropriate here.

I'm still new around these parts. Forgive me.


>If you know the hash of some data, then you either already have the data yourself, or you learned the hash from someone who had the data.

From the article, you do not need to have the data nor learn the hash from someone who had the data.

>Commit hashes can be brute forced through GitHub’s UI, particularly because the git protocol permits the use of short SHA-1 values when referencing a commit. A short SHA-1 value is the minimum number of characters required to avoid a collision with another commit hash, with an absolute minimum of 4. The keyspace of all 4 character SHA-1 values is 65,536
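The quoted keyspace is easy to check: a short SHA-1 reference of 4 hex characters gives 16^4 = 65,536 possible prefixes. A minimal sketch of the enumeration (the probe URL in the comment is hypothetical, just to illustrate how such a brute force would proceed):

```python
import itertools

HEX = "0123456789abcdef"

def all_short_hashes(length=4):
    """Enumerate every possible short-hash prefix of the given length."""
    return ["".join(chars) for chars in itertools.product(HEX, repeat=length)]

candidates = all_short_hashes()
print(len(candidates))  # 65536

# A brute-force probe would then try each candidate against the
# repository's commit URL, e.g. (hypothetical endpoint):
#   https://github.com/<owner>/<repo>/commit/<candidate>
# and note which prefixes resolve, even to commits that were never
# meant to be publicly reachable.
```

At one request per candidate, the whole 4-character keyspace is trivially small for an automated scan.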


In which case, yeah, that's a vulnerability. They shouldn't allow a short hash to match up against anything but public data.


It's common to use a short hash in a pull request, and then modify or rebase the commits.

The solutions are:

* Force people to use the full hash.

* Get used to a lot of dead links.

* Claim that it's a feature, not a bug.


* Force people to use the full hash for commits pushed from now on?


* Check visibility at the time of posting.


>Come on, this is not surprising.

Very cool that it is not surprising to you.

But to others (some are even in this thread!) it is both new and surprising. They unfortunately missed your 4 year old comment, but at least they get to learn it now.


Could you help me understand what you are suggesting is done instead?

To me, it seems like you're suggesting that vulnerabilities are just left in play until someone malicious comes along and decides to do some real damage. But that seems so silly that I must be missing some alternative that you're thinking about.


> it seems like you're suggesting that vulnerabilities are just left in play until someone malicious comes along and decides to do some real damage

That's how security mostly works in meatspace, yes.

In the specific case of internet connected software the industry has a lot of experience saying that if something is exploitable then someone will come along and exploit, so we don't normally need to see an example of it happening in the real world first. It's sufficient to assume that if you get popular enough, a professional blackhat will find your bugs and exploit them. It's also reasonable to assume that the cost of a fix is low and the cost of change in the field is also low.

Outside that context the threat models are usually unclear and refined through experience. If you notice someone cut through a wire fence to steal some equipment from a cell tower, maybe you build a wall around it instead. But if nobody is stealing anything, there's no point in pre-emptively guessing that it might happen and building lots of walls; perhaps there's no market for stolen tower equipment, so protecting it better would just be a waste of resources.


> vulnerabilities are just left in play until someone malicious comes along and decides to do some real damage. But that seems so silly

Well, that's exactly how it tends to work for housing, so I think GP's point is that if it works there it should work here. However, I disagree because the stakes are so different (harming a single family who are free to harden however they like, versus harming the general public who are at the mercy of whatever hardening is done for them).


You've noticed an issue.

You let the manufacturer know, and you let them decide for the next steps.

No ultimatum to threaten to disclose to the public or to ruin their reputation, it's not your business.

In the meantime, you keep it for yourself.

You helped: no lawyers, no problems.

If really there is a safety issue, after a reasonable period of time you can inform the regulators, as it is their job to assess safety.

This is responsible disclosure, not TMZ-style public-shaming.


You're presenting this as if it's a new idea, but the security industry tried the above (for the majority of the time that "computer security" has been a thing) and... it didn't work! That's the whole reason public disclosure came about in the first place -- there's quite a rich history there if you're interested.

Some other thoughts:

>You let the manufacturer know, and you let them decide for the next steps.

Which, as history has proven, generally means sweeping it under the rug, where it's forgotten about until it's exploited by a bad actor.

>it's not your business

But, what about when it is? On-topic: I drive a car, so I care about vulnerabilities in traffic lights that may directly affect me. It's also my business if my personal data is stolen, or my identity, or corporate data, etc.

>You helped: no lawyers, no problems.

No problems... Until the vulnerability is exploited and it causes me a problem.


> No ultimatum to threaten to disclose to the public or to ruin their reputation, it's not your business.

I found an authentication bypass in a door card access controller. Per the installer I was working with the units are regularly exposed directly to the Internet. (Heck, the installer was trying to cajole my Customer into doing it for "remote support" reasons.)

Given that there's an impact to the public-- albeit not necessarily directly safety-related-- I think this kind of vulnerability is still "my business".

If I owned one of these controllers and it was "protecting" my property I'd want to know.

(Fun aside: The installer went so far as to suggest that because their other Customers expose these units to the Internet-- particularly a small bank who is "audited" for "security"-- it would be okay if my Customer did it. Needless to say, my Customer did not. I let my Customer know about the auth. bypass and we kept the unit locked down in a VLAN w/ a restrictive ACL, but I never publicly disclosed... too afraid of hostile response from the vendor. Eventually a researcher did find it and disclose it publicly, at least...)


>Your outrage sounds disingenuous.

I've read through these comment chains a few times, but I'm having a really hard time finding the "outrage", disingenuous or otherwise. Can you quote the part of the comment that displayed outrage?

>Do you and their security engineers even live in the same time zone?

Reading through this thread, you can find where the OP says that the time zones were accounted for.


This is a symptom of a broader narrative perpetuated by the company itself, that any criticism is from "haters".


I know nothing about the product they are selling. As an engineer, I think writing a critical blog post after giving just a 1.5 day notice is bad behavior.


Wikipedia has it incorrect then, as they list it as "formerly web.com" ("Network Solutions, LLC, formerly Web.com is an American-based technology company"). Thanks for the clarification!


Most data destruction compliance standards I am familiar with allow for cryptographic erasure when the device is encrypted prior to sensitive data being written to it (excluding some specific data-sensitivity levels).

If they are strict enough to not allow for cryptographic erasure (or the data is above a specific sensitivity), this device would likely not be in compliance either -- physical destruction generally requires shredding/grinding to a specific particulate size, or incineration, and this device does not appear to do either.


I'm not saying there are many (any?) modern standards that would allow physical destruction without cryptographic erasure. As far as I know, physical destruction requirements are usually accompanied by cryptographic erasure requirements.

I'm also not saying that all compliance standards related to data security require physical destruction; just that these absolutely exist, mostly in defense and similar areas.


Most standards (e.g. ISO 27001, NIST 800-88) do allow for physical destruction without cryptographic erasure if the device is being shredded or incinerated (to the applicable shredding/incineration standard of particulate size/temperature). Especially because cryptographic erasure is effectively pointless (at high data-sensitivity levels) if the device wasn't encrypted immediately and prior to data being written. Notably, NIST 800-88 2.6 explains when not to use cryptographic erasure, and when to consider it, but there is no requirement for it.

But, I mainly made my comment in reply to this part of your comment:

>I’d assume this device targets that market.

Because I don't think there is any market where this SSD punching device would be compliant and cryptographic erasure wouldn't be compliant. At least, in my career, I have not seen any environment or standard where this would be considered compliant but cryptographic erasure wouldn't be.


Right, but nobody's arguing that there are cases where you'd physically destroy a device, while cryptographic erasure of the data is not required as well.

I didn't explicitly say this in my original comment since it seemed implicit given the context.


>nobody's arguing that there are cases where you'd physically destroy a device, while cryptographic erasure of the data is not required as well.

I am very explicitly saying cryptographic erasure is not required if you are following physical destruction standards (in ISO 27001 and NIST 800-88, at least).


This isn't necessarily sufficient unless you encrypt the drives before any data is written to them. If any potentially sensitive data has been written to the drive prior to encryption, the only 100% method is physical destruction.

Of course, this clarification only matters if your threat model involves dealing with top-secret data and/or nation-state enemies.


I don't know, personally, I would be very unhappy if someone stole my server and then started blackmailing me to reveal private information somewhere (unless I pay a certain sum). I don't have anything to hide, but I still don't want my private information public. I don't need to mind about this with encrypted data.


>I don't need to mind about this with encrypted data.

I'm not sure if I wasn't clear or if you didn't read my comment correctly.

Encrypting is not enough to prevent data recovery if data was written to disk prior to encrypting it.

In other words, if you want to be 100% sure about your data being safe, you must encrypt first (when the drive is brand new), or you must physically destroy the drive.


Yes, I understood - but this has nothing to do with encryption. Data that is encrypted is safe. Any data that is not encrypted (or was not encrypted) would offer an attack surface. Since I use ZFS for all my data, all my data is encrypted from minute 1 of a new hard drive.


Format, then sdelete x 10 passes writing random data, then secure erase for good measure, will take care of it for 99% of use cases out there.


Sure, if you don't need to meet any compliance standards and your threat model is pretty relaxed, this is likely okay.

But if your threat model is that relaxed, you can just encrypt the whole drive, toss the key, and then format the device. This would likely be quicker than doing 10x write passes.

As a note, write passes are really only good for HDDs due to wear-leveling algorithms in every SSD.
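The "encrypt, toss the key" approach above is cryptographic erasure in a nutshell. A toy sketch of the idea, using only a stdlib SHA-256 counter-mode keystream for illustration (this is NOT real disk encryption -- actual deployments use AES-XTS, LUKS, or the drive's built-in SED support):

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Toy SHA-256 counter-mode keystream, for illustration only."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

key = secrets.token_bytes(32)  # generated BEFORE any data is written
plaintext = b"sensitive customer records"
ciphertext = xor(plaintext, keystream(key, len(plaintext)))

# Normal operation: the key decrypts the data.
assert xor(ciphertext, keystream(key, len(ciphertext))) == plaintext

# Cryptographic erasure: destroy the key. Without it, the ciphertext
# left on the device is useless to an attacker.
key = None
```

This also shows why the "encrypt first, when the drive is brand new" caveat matters: any plaintext written before the key existed was never covered by the keystream in the first place.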


More precisely, EDR (sometimes EDTR -- endpoint detection and threat response) is one component of a robust endpoint protection platform.

EPPs will consist of threat detection and response (EDR), as well as proactive prevention, vulnerability management, threat intelligence, data-loss prevention, encryption management, etc.


It's mentioned both in the article, and in the comments here where someone else thought it wasn't mentioned.

They didn't use the acronym ("LISA"), but instead spelled out the entire thing.

>Researchers are now working on several next-generation LIGO-type observatories, both on Earth and, in space, the Laser Interferometer Space Antenna; [...]

