Ok, I read her Wikipedia entry and apparently she has Bachelor's and Master's degrees in electrical engineering from Stanford. Those are probably fairly standardized, so I'll concede that she does have some technical knowledge.
I have, however, seen way too many doctorates that seem technical when they're not. There's a common theme in her career, and it's not necessarily technical knowledge.
She worked at Microsoft Research, which sounds really impressive, but it was in the "Fairness, Accountability, Transparency and Ethics in AI" lab, which sounds like a PR stunt. Her research was supported by a Stanford DARE fellowship, which is mainly concerned with increasing diversity. She was an AI researcher at Google, but that work consisted of pointing out bias in datasets. Her PhD research[1] used street view images to find pickup trucks and correlate them with Republican voters, which sounds more like a sociological application of computer vision than a hard technical problem.
When I was a grad student at Stanford she was one of the TAs for the main ML class (CS229). I can personally attest that she has solid technical skills and is quite sharp. (I got my PhD in EE at Stanford around the same time)
Ok, I'll take your word for it, I guess I was wrong about the technical skills bit. Looking at her career path I still get the impression that she's more interested in social activism within technical companies than engineering though.
Fairness, Accountability, Transparency (aka FAT) is a real sub-field of machine learning that is, in my opinion, as technical as any other machine learning sub-field. It is not a PR stunt. I've published a paper at a top ML conference about some CUDA kernels I wrote to accelerate training a class of RNNs, so I feel I have the technical grounding to make these claims.
Many FAT papers are published at NeurIPS or ICML (generally considered the top two machine learning conferences). There's also a conference dedicated to the topic: https://facctconference.org/
Her technical work and credentials are solid, and working on better datasets is valuable and important. What is questionable is acting as if every dataset found to have some dimension of bias against a race or gender deemed underprivileged is a malicious crime against humanity, when those datasets contain many biases cutting both ways, when there are many alternate explanations for this state of affairs besides malicious discrimination and oppression, and when reasonable courses of action include the constructive contribution of building and using better datasets.
The fine-grained classification work associated with her thesis isn't an _easy_ technical problem. The dataset, labeling, and architecture generated a few other first-author conference papers for her besides the thesis, so it's not like she just applied an off-the-shelf model to an existing dataset...
You are making a lot of (wrong) assumptions here. Not many Stanford PhDs graduate without technical knowledge.