
"but it still bothers me that people like this get highly paid positions to point out problems seemingly without any technical knowledge"

You are making a lot of (wrong) assumptions here. Not too many Stanford PhDs graduate without technical knowledge.



Ok, I read her Wikipedia entry and apparently she has Bachelor's and Master's degrees in electrical engineering from Stanford. Those are probably fairly standardized, so I'll concede that she does have some technical knowledge.

I have, however, seen way too many doctorates that seem technical but aren't. There's a common theme in her career, and it's not necessarily technical knowledge.

She worked at Microsoft Research, which sounds really impressive. But it was in the "Fairness, Accountability, Transparency and Ethics in AI" lab, which sounds like a PR stunt. Her research was supported by a Stanford DARE fellowship, which is mainly concerned with increasing diversity. She was an AI researcher at Google, but the work consisted of pointing out bias in datasets. Her PhD research[1] used street view images to find pickup trucks and found a correlation with Republican voters, which sounds more like a sociological application of computer vision than a hard technical problem.

[1]https://www.pnas.org/content/114/50/13108


When I was a grad student at Stanford she was one of the TAs for the main ML class (CS229). I can personally attest that she has solid technical skills and is quite sharp. (I got my PhD in EE at Stanford around the same time)


Ok, I'll take your word for it; I guess I was wrong about the technical skills bit. Looking at her career path, I still get the impression that she's more interested in social activism within technical companies than in engineering, though.


I think she deserves the minimum amount of credibility that you would assign to any Stanford PhD coming out of Fei-Fei Li's lab.

You have to complete all the requirements and TA grad classes, after all.


Fairness, Accountability, Transparency (aka FAT) is a real sub-field of machine learning, and in my opinion it is as technical as any other machine learning sub-field. It is not a PR stunt. I've published a paper at a top ML conference about some CUDA kernels I wrote to accelerate training a class of RNNs, so I feel I have the technical grounding to make these claims.

Here are some FAT papers I've enjoyed: https://arxiv.org/abs/1806.08010 https://arxiv.org/abs/1802.04023 https://arxiv.org/abs/1803.04383 (won best paper award at ICML 2018)

Many FAT papers are published at NeurIPS or ICML (generally considered top two machine learning conferences). There's also a conference just on the topic: https://facctconference.org/
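For a concrete sense of what FAT work builds on: one of the simplest fairness criteria studied in this literature is demographic parity, which compares a classifier's positive-prediction rate across groups defined by a protected attribute. Here's a minimal sketch with toy data (the predictions and group labels are made up purely for illustration):

```python
# Demographic parity: compare the rate of positive predictions
# across groups defined by a protected attribute.
# Toy data for illustration only.

def positive_rate(preds, groups, group):
    # Fraction of positive predictions within one group.
    selected = [p for p, g in zip(preds, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_gap(preds, groups):
    # Difference between the highest and lowest
    # positive-prediction rates across groups.
    rates = {g: positive_rate(preds, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

preds  = [1, 0, 1, 1, 0, 0, 1, 0]   # binary classifier outputs
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(demographic_parity_gap(preds, groups))
# group a: 3/4 positive, group b: 1/4 positive -> gap = 0.5
```

Much of the technical depth in FAT papers comes from what follows this kind of definition: proving when such criteria are mutually incompatible, or enforcing them as constraints during training.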


She doesn't deserve to have her work discredited with an ad hominem attack. Humans are complex multi-faceted creatures.

I have determined from your condescending tone in this thread that you do not have technical skills. See? Doesn't feel good or make much sense!


Her technical work and credentials are solid, and working on better datasets is valuable and important. What is questionable is acting as if every dataset found to have some dimension of bias against a race or gender deemed underprivileged represents a malicious crime against humanity, when those datasets contain many biases cutting both ways, when there are many alternate explanations for this state of affairs besides malicious discrimination and oppression, and when reasonable courses of action include the constructive contribution of building and using better datasets.


The fine-grained classification work associated with her thesis isn't an _easy_ technical problem. The dataset, labeling, and architecture generated a few other first-author conference papers for her besides the thesis, so it's not like she just applied an off-the-shelf model to an existing dataset...



