
Apr 27, 2021



@pageinSec on Twitter

 

Dan Kaminsky obit: https://www.theregister.com/2021/04/25/dan_kaminsky_obituary/

 

Spencer Gietzen: http://brakeingsecurity.com/2018-024-pacu-a-tool-for-pentesting-aws-environments

 

https://en.wikipedia.org/wiki/Milgram_experiment

 

https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh@linuxfoundation.org/

 

https://cse.umn.edu/cs/statement-cse-linux-kernel-research-april-21-2021




https://www.labbott.name/blog/2021/04/21/breakingtrust.html

It looks like roughly 190 patches from UMN addresses were flagged for reverting, and each one had to be re-reviewed by kernel maintainers.

https://twitter.com/UMNComputerSci/status/1384948683821694976 (UMN Computer Science's response to the researchers)

Linux Kernel mailing list: https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah.com/

https://danielmiessler.com/blog/explaining-threats-threat-actors-vulnerabilities-and-risk-using-a-real-world-scenario/

https://twitter.com/SarahJamieLewis/status/1384871385537908736

@SarahJamieLewis shows the change the researchers submitted in their paper: https://twitter.com/SarahJamieLewis/status/1384876050207940608 (see the sketch after these links)

https://twitter.com/SarahJamieLewis/status/1330671897822982144/photo/1

https://twitter.com/SarahJamieLewis/status/1384880034146574341/photo/1
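For context on the class of change being discussed (this is NOT the actual UMN patch shown in the tweets above): a minimal, hypothetical userspace sketch of the "looks like an error-path leak fix, actually introduces a use-after-free" pattern. The names (struct session, session_init) are invented for illustration.

/* Hypothetical, simplified analogue of a "hypocrite commit":
 * a small patch adds a free() on an error path (plausible as a
 * memory-leak fix), but existing caller code still touches the
 * same pointer on failure, turning the "fix" into a
 * use-after-free plus double free. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct session {
        char *buf;
};

/* The "patched" helper: the added free(s->buf) is the innocuous-looking
 * one-line change a reviewer might wave through as an error-path cleanup. */
static int session_init(struct session *s, size_t len)
{
        s->buf = malloc(len);
        if (!s->buf)
                return -1;
        if (len < 16) {          /* simulated init failure */
                free(s->buf);    /* <-- the submitted "fix" */
                return -1;
        }
        memset(s->buf, 0, len);
        return 0;
}

int main(void)
{
        struct session s;

        if (session_init(&s, 8) != 0) {
                /* Pre-existing caller code still uses buf on failure,
                 * so the new free() above makes these a use-after-free
                 * write, a use-after-free read, and a double free. */
                strcpy(s.buf, "error");
                fprintf(stderr, "init failed: %s\n", s.buf);
                free(s.buf);
                return 1;
        }

        free(s.buf);
        return 0;
}

Building with AddressSanitizer (e.g. gcc -fsanitize=address) flags the use-after-free immediately; the harder real-world problem is that error paths like this are rarely exercised in testing, which is what makes the pattern hard to catch in review.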

https://web.archive.org/web/20210421145121/https://www-users.cs.umn.edu/~kjlu/papers/crix.pdf (it appears the researcher has deleted this paper from their site.)

https://web.archive.org/web/20210422144500/https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc.pdf (researcher deleted this paper from their site.)
“Throughout the study, we honestly did not think this is human research, so we did not apply for an IRB approval in the beginning. We apologize for the raised concerns. This is an important lesson we learned---Do not trust ourselves on determining human research; always refer to IRB whenever a study might be involving any human subjects in any form. We would like to thank the people who suggested us to talk to IRB after seeing the paper abstract.”

https://github.com/QiushiWu/qiushiwu.github.io

NSF Grant application (thank you Page!) https://www.nsf.gov/awardsearch/showAward?AWD_ID=1931208&HistoricalAwards=false 

NSF IRB requirements (from 2007): https://www.nsf.gov/pubs/2007/nsf07006/nsf07006.jsp

A more recent version may exist: Human Subjects | NSF - National Science Foundation

The researchers issued an apology on 25 April: https://lore.kernel.org/lkml/CAK8KejpUVLxmqp026JY7x5GzHU2YJLPU8SzTZUNXU2OXC70ZQQ@mail.gmail.com/ (thanks to Zack Whittaker's security mailing list)

 

https://twitter.com/argvee

Thought-provoking question for your show: is it realistically possible for an organization to build and scale a culture of code review that catches malicious insertions through (1) expert analysis and (2) an adversarial mindset?

Co-author of: https://www.amazon.com/Building-Secure-Reliable-Systems-Implementing/dp/1492083127

 

Introducing these patches, whether they were meaningful fixes or not, created more work for kernel developers.

Revert list of 190 patches (threaded): https://lkml.org/lkml/2021/4/21/454 

Quick overview of using deception in research from Duke’s IRB: Using Deception in Research | Institutional Review Board (duke.edu)

Is this better? Where’s the line on this?

https://www.bleepingcomputer.com/news/security/emotet-malware-nukes-itself-today-from-all-infected-computers-worldwide/