A New Mindcraft Moment?


Posted Nov 6, 2015 20:50 UTC (Fri) by PaXTeam (guest, #24616) [Link]



1. this WP article was the fifth in a series of articles following the security of the internet from its beginnings to related topics of today. discussing the security of linux (or lack thereof) fits nicely in there. it was also a well-researched article with over two months of research and interviews, something you can't quite claim yourself in your recent pieces on the topic. you don't like the facts? then say so. or even better, do something constructive about them like Kees and others have been trying. however, silly comparisons to old crap like the Mindcraft studies and fueling conspiracies don't exactly help your case.

2. "We do a reasonable job of finding and fixing bugs." let's start here. is this statement based on wishful thinking or cold hard facts you're going to share in your response? according to Kees, the lifetime of security bugs is measured in years. that's more than the lifetime of many devices people buy, use and ditch in that period.

3. "Issues, whether they're security-related or not, are patched quickly," some are, some aren't: let's not forget the recent NMI fixes that took over 2 months to trickle down to stable kernels, and we also have a user who has been waiting for over 2 weeks now: http://thread.gmane.org/gmane.comp.file-systems.btrfs/49500 (FYI, the overflow plugin is the first one Kees is trying to upstream, imagine the shitstorm if bugreports will be handled with this attitude, let's hope the btrfs guys are an exception, not the rule). anyway, two examples are not statistics, so once again, do you have numbers or is it all wishful thinking? (it's partly a trick question because you'll also have to explain how something gets decided to be security related, which as we all know is a messy business in the linux world)

4. "and the stable-update mechanism makes those patches available to kernel users." except when it doesn't. and yes, i have numbers: grsec carries 200+ backported patches in our 3.14 stable tree.

5. "In particular, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream." you don't have to be shy about naming us, after all you did so elsewhere already. and we also explained the reasons why we haven't pursued upstreaming our code: https://lwn.net/Articles/538600/ . since i don't expect you and your readers to read any of it, here's the tl;dr: if you want us to spend thousands of hours of our time to upstream our code, you'll have to pay for it. no ifs, no buts, that's how the world works, that's how >90% of linux code gets in too. i personally find it pretty hypocritical that well-paid kernel developers are bitching about our unwillingness and inability to serve them our code on a silver platter for free. and before someone brings up the CII, go check their mail archives: after some initial exploratory discussions i explicitly asked them about supporting this long drawn-out upstreaming work and got no answers.



Posted Nov 6, 2015 21:39 UTC (Fri) by patrick_g (subscriber, #44470) [Link]



Money (aha) quote: > I suggest you spend none of your free time on this. Zero. I suggest you get paid to do this. And well. Nobody expects you to serve your code on a silver platter for free. The Linux Foundation and big companies using Linux (Google, Red Hat, Oracle, Samsung, etc.) should pay security specialists like you to upstream your patches.



Posted Nov 6, 2015 21:57 UTC (Fri) by nirbheek (subscriber, #54111) [Link]



I'd just like to point out that the way you phrased this makes your comment a tone argument[1][2]; you've (probably unintentionally) dismissed all of the parent's arguments by pointing at their presentation. The tone of PaXTeam's comment shows the frustration built up over the years with the way things work, which I think should be taken at face value, empathized with, and understood rather than simply dismissed. 1. http://rationalwiki.org/wiki/Tone_argument 2. http://geekfeminism.wikia.com/wiki/Tone_argument Cheers,



Posted Nov 7, 2015 0:55 UTC (Sat) by josh (subscriber, #17465) [Link]



Posted Nov 7, 2015 1:21 UTC (Sat) by PaXTeam (guest, #24616) [Link]



why, is upstream known for its fundamental civility and decency? have you even read the WP post under discussion, never mind past lkml traffic?



Posted Nov 7, 2015 5:37 UTC (Sat) by josh (subscriber, #17465) [Link]



Posted Nov 7, 2015 5:34 UTC (Sat) by gmatht (guest, #58961) [Link]



No Argument



Posted Nov 7, 2015 6:09 UTC (Sat) by josh (subscriber, #17465) [Link]



Please don't; it doesn't belong there either, and it especially doesn't need the kind of cheering section that the tech press (LWN generally excepted) tends to provide.



Posted Nov 8, 2015 8:36 UTC (Sun) by gmatht (guest, #58961) [Link]



Okay, but I was thinking of Linus Torvalds



Posted Nov 8, 2015 16:11 UTC (Sun) by pbonzini (subscriber, #60935) [Link]



Posted Nov 6, 2015 22:43 UTC (Fri) by PaXTeam (guest, #24616) [Link]



Posted Nov 6, 2015 23:00 UTC (Fri) by pr1268 (subscriber, #24648) [Link]



Why should you assume only money will fix this problem? Yes, I agree more resources should be spent on fixing Linux kernel security issues, but don't assume someone giving an organization (ahem, PaXTeam) money is the only solution. (Not meant to impugn PaXTeam's security efforts.)



The Linux development community may have had the wool pulled over its collective eyes with respect to security issues (either real or perceived), but merely throwing money at the problem won't fix this.



And yes, I do realize the commercial Linux distros do a lot (most?) of the kernel development these days, and that implies indirect monetary transactions, but it's much more involved than just that.



Posted Nov 7, 2015 0:36 UTC (Sat) by PaXTeam (guest, #24616) [Link]



Posted Nov 7, 2015 7:34 UTC (Sat) by nix (subscriber, #2304) [Link]



Posted Nov 7, 2015 9:49 UTC (Sat) by PaXTeam (guest, #24616) [Link]



Posted Nov 6, 2015 23:13 UTC (Fri) by dowdle (subscriber, #659) [Link]



I believe you actually agree with the gist of Jon's argument... not enough focus has been given to security in the Linux kernel... the article gets that part right... money hasn't been going toward security... and now it needs to. Aren't you glad?



Posted Nov 7, 2015 1:37 UTC (Sat) by PaXTeam (guest, #24616) [Link]



they talked to spender, not me personally, but yes, this side of the coin is well represented by us and others who were interviewed. the same way Linus is a good representative of, well, his own pet project called linux.

> And if Jon had only talked to you, his would have been too.

given that i am the author of PaX (part of grsec), yes, talking to me about grsec matters makes it one of the best ways to research it. but if you know of someone else, be my guest and name them, i'm quite sure the recently formed kernel self-protection folks would be dying to engage them (or not, i don't think there's a sucker out there with thousands of hours of free time on their hands).

> [...]it also contained quite a few groan-worthy statements.

nothing is perfect, but considering the audience of the WP, that's one of the better journalistic pieces on the topic, no matter how much you and others dislike the sorry state of linux security exposed in there. if you want to discuss more technical details, nothing stops you from talking to us ;). speaking of your complaints about journalistic qualities: since a previous LWN article saw fit to include several typical dismissive claims by Linus about the quality of unspecified grsec features, with no evidence of what experience he had with the code or how recent it was, how come we didn't see you or anyone else complaining about the quality of that article?

> Aren't you glad?

no, or not yet anyway. i've heard lots of empty words over the years and nothing ever materialized, or worse, all the money has gone to the pointless exercise of fixing individual bugs and the associated circus (which Linus rightfully despises, FWIW).



Posted Nov 7, 2015 0:18 UTC (Sat) by bojan (subscriber, #14302) [Link]



Posted Nov 8, 2015 13:06 UTC (Sun) by k3ninho (subscriber, #50375) [Link]



Right now we've got developers from big names saying that doing everything the Linux ecosystem does *safely* is an itch that they have. Sadly, the surrounding cultural attitude of developers is to hit functional goals, and occasionally performance goals. Security goals are often neglected. Ideally, the culture would shift so that we make it difficult to follow insecure habits, patterns or paradigms -- that's a task that will take a sustained effort, not merely the upstreaming of patches. Whatever the culture, these patches will go upstream eventually anyway, because the ideas they embody are now timely. I can see one way to make it happen: Linus will accept them when a big end-user (say, Intel, Google, Facebook or Amazon) delivers stuff with notes like 'here's a set of improvements, we're already using them to solve this sort of problem, here's how everything will keep working because $proof, note carefully that you're staring down the barrels of a fork because your tree is now evolutionarily disadvantaged'. It's a game and can be gamed; I'd prefer that the community shepherds users into following the pattern of declaring problem + solution + functional test evidence + performance test evidence + security test evidence. K3n.



Posted Nov 9, 2015 6:49 UTC (Mon) by jospoortvliet (guest, #33164) [Link]



And about that fork barrel: I'd argue it's the other way around. Google forked and lost already.



Posted Nov 12, 2015 6:25 UTC (Thu) by Garak (guest, #99377) [Link]



Posted Nov 23, 2015 6:33 UTC (Mon) by jospoortvliet (guest, #33164) [Link]



Posted Nov 7, 2015 3:20 UTC (Sat) by corbet (editor, #1) [Link]



So I have to confess to a certain amount of confusion. I could swear that the article I wrote said exactly that, but you've put a fair amount of effort into flaming it...?



Posted Nov 8, 2015 1:34 UTC (Sun) by PaXTeam (guest, #24616) [Link]



Posted Nov 6, 2015 22:52 UTC (Fri) by flussence (subscriber, #85566) [Link]



I personally think you and Nick Krause share opposite sides of the same coin. Programming ability and basic civility.



Posted Nov 6, 2015 22:59 UTC (Fri) by dowdle (subscriber, #659) [Link]



Posted Nov 7, 2015 0:16 UTC (Sat) by rahvin (guest, #16953) [Link]



I hope I'm wrong, but a hostile attitude isn't going to help anyone get paid. It's at a time like this, when there's something you appear to be an "expert" at and there's demand for that expertise, that you show cooperation and willingness to participate, because it's an opportunity. I'm relatively shocked that someone doesn't get that, but I'm older and have seen a few of these opportunities in my career and exploited the hell out of them. You only get a few of them in a typical career, a handful at most. Sometimes you have to invest in proving your skills, and this is one of those moments. It seems the kernel community may finally take this security lesson to heart and embrace it, as described in the article as a "Mindcraft moment". This is an opportunity for developers who may want to work on Linux security. Some will exploit the opportunity and others will thumb their noses at it. In the end, the developers who exploit the opportunity will prosper from it. I feel old even having to write that.



Posted Nov 7, 2015 1:00 UTC (Sat) by josh (subscriber, #17465) [Link]



Maybe there's a chicken-and-egg problem here, but when seeking out and funding people to get code upstream, it helps to pick people and groups with a history of being able to get code upstream. It's also perfectly reasonable to prefer working out of tree, which provides the ability to develop impressive and important security advances unconstrained by upstream requirements. That's work someone might also want to fund, if it meets their needs.



Posted Nov 7, 2015 1:28 UTC (Sat) by PaXTeam (guest, #24616) [Link]



Posted Nov 7, 2015 19:12 UTC (Sat) by jejb (subscriber, #6654) [Link]



You make this argument (implying you do research and Josh doesn't) and then fail to support it with any cite. It would be much more convincing if you gave up on the Onus probandi rhetorical fallacy and actually cited facts.

> case in point, it was *them* who suggested that they wouldn't fund out-of-tree work but would consider funding upstreaming work, except when pressed for the details, all i got was silence.

For those following along at home, this is the relevant set of threads: http://lists.coreinfrastructure.org/pipermail/cii-discuss... A quick precis is that they told you your project was bad because the code was never going upstream. You told them it was because of the kernel developers' attitude, so they should fund you anyway. They told you to submit a grant proposal, you whined more about the kernel attitudes, and eventually even your apologist told you that submitting a proposal might be the best thing to do. At that point you went silent, not vice versa as you suggest above.

> obviously i won't spend time to write up a begging proposal just to be told that 'no sorry, we don't fund multi-year projects at all'. that's something one should be told in advance (or heck, be part of some public guidelines so that others will know the rules too).

You seem to have a fatally flawed grasp of how public funding works. If you don't tell people why you want the money and how you'll spend it, they're unlikely to disburse. Saying "I'm smart and I know the problem, now hand over the money" doesn't even work for most academics who have a solid reputation in the field; which is why most of them spend >30% of their time writing grant proposals.

> as for getting code upstream, how about you check the kernel git logs (minus the stuff that wasn't properly credited)?

jejb@jarvis> git log|grep -i 'Author: pax.*team'|wc -l
1

Stellar, I must say.

And before you light off on those who have misappropriated your credit, please remember that getting code upstream on behalf of reluctant or incapable actors is a hugely valuable and time-consuming skill, and one of the reasons groups like Linaro exist and are well funded. If more of your stuff does go upstream, it will be due to the not inconsiderable efforts of other people in this area. You now have a business model selling non-upstream security patches to customers. There's nothing wrong with that, it's a fairly typical first-stage business model, but it does rather depend on the patches not being upstream in the first place, calling into question the earnestness of your attempt to put them there. Now here's some free advice in my field, which is helping companies align their businesses in open source: the selling-out-of-tree-patches route is always an eventual failure, particularly with the kernel, because if the functionality is that useful, it gets upstreamed or reinvented in spite of you, leaving you with nothing to sell. If your plan B is selling expertise, you have to remember that it will be a hard sell when you have no out-of-tree differentiator left and git history denies that you had anything to do with the in-tree patches. Actually, "crazy security person" will become a self-fulfilling prophecy. The advice? It was obvious to everyone else who read this, but for you, it's: do the upstreaming yourself before it gets done for you. That way you have a legitimate historical claim to plan B, and you might even have a plan A selling a rollup of upstream-track patches integrated and delivered before the distributions get around to it. Even your application to the CII couldn't then be dismissed because your work wasn't going anywhere.

Your alternative is to continue playing the role of Cassandra and possibly suffer her eventual fate.
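[The counting approach in the one-liner above can be reproduced in a sandbox; the following is an illustrative sketch only, with invented names, emails, and commit messages, showing both the grep-style count and a `git log --author` equivalent:]

```python
# Build a throwaway git repo so the commit counts are reproducible,
# then count commits whose Author line matches a pattern.
import subprocess, tempfile

def run(*args, cwd):
    subprocess.run(args, cwd=cwd, check=True, capture_output=True, text=True)

repo = tempfile.mkdtemp()
run("git", "init", "-q", ".", cwd=repo)
for name, email, msg in [("PaX Team", "pax@example.com", "fix one"),
                         ("Someone Else", "other@example.com", "fix two")]:
    # -c supplies an identity per commit; both author and committer are set.
    run("git", "-c", f"user.name={name}", "-c", f"user.email={email}",
        "commit", "-q", "--allow-empty", "-m", msg, cwd=repo)

log = subprocess.run(["git", "log"], cwd=repo, capture_output=True,
                     text=True, check=True).stdout
# Equivalent of: git log | grep -ic 'Author: pax'
count = sum(line.lower().startswith("author: pax")
            for line in log.splitlines())

# More robust: ask git itself to filter by author, which can't be fooled
# by the string "Author:" appearing inside a commit message body.
robust = len(subprocess.run(["git", "log", "--author=PaX", "--oneline"],
                            cwd=repo, capture_output=True, text=True,
                            check=True).stdout.splitlines())
print(count, robust)
```

[Note that piping `git log` through grep also matches "Author:" strings inside commit message bodies, which is one reason such one-liners undercount or overcount; `--author` filters on the actual authorship field.]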



Posted Nov 7, 2015 23:20 UTC (Sat) by PaXTeam (guest, #24616) [Link]



> Second, for the potentially viable pieces this would be a multi-year
> full-time job. Is the CII willing to fund projects at that level? If not
> we all would end up with lots of unfinished and partially broken features.

please show me the answer to that question. without a definitive 'yes' there's no point in submitting a proposal, because that's the timeframe that in my opinion the job will take, and any proposal with that requirement would be shot down immediately and be a waste of my time. and i stand by my claim that such simple basic requirements should be public information.

> Stellar, I must say.

"Lies, damned lies, and statistics". you know there's more than one way to get code into the kernel? how about you use your git-fu to find all the bugreports/suggested fixes that went in as a result of us? as for me specifically, Greg explicitly banned me from future contributions via af45f32d25cc1, so it's no wonder i don't send patches directly in (and that one commit you found that went in despite said ban is actually a very bad example, because it's also the one that Linus censored for no good reason and that made me decide to never send security fixes upstream until that practice changes).

> You now have a business model selling non-upstream security patches to customers.

now? we've had paid sponsorship for our various stable kernel series for 7 years. i wouldn't call it a business model though, as it hasn't paid anyone's bills.

> [...]calling into question the earnestness of your attempt to put them there.

i must be missing something here, but what attempt? i've never in my life tried to submit PaX upstream (for all the reasons discussed already). the CII mails were exploratory, to see how serious that whole organization is about actually securing core infrastructure. in a sense i've got my answers, there's nothing more to the story.

as for your free advice, let me reciprocate: complex problems don't solve themselves. code solving complex problems doesn't write itself. people writing code that solves complex problems are few and far between, as you'll find out in short order. such people (domain experts) don't work for free, with few exceptions like ourselves. biting the hand that feeds you will only end you up in starvation. PS: since you're so sure about kernel developers' ability to reimplement our code, maybe look at what parallel features i still maintain in PaX despite vanilla having a 'totally-not-reinvented-here' implementation, and try to understand the reason. or just look at all the CVEs that affected, say, vanilla's ASLR but didn't affect mine. PPS: Cassandra never wrote code, i do. criticizing the sorry state of kernel security is a side project for when i'm bored or just waiting for the next kernel to compile (i wish LTO was more efficient).



Posted Nov 8, 2015 2:28 UTC (Sun) by jejb (subscriber, #6654) [Link]



In other words, you tried to define their process for them ... I can't think why that wouldn't work.

> "Lies, damned lies, and statistics".

The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument. I posted a one-line command anyone could run to get the number of patches you've authored in the kernel. Why don't you post an equivalent that gives figures you like better?

> i've never in my life tried to submit PaX upstream (for all the reasons discussed already).

So the master plan is to demonstrate your expertise by the number of patches you haven't submitted? Great plan, world domination beckons, sorry that one got away from you, but I'm sure you won't let it happen again.



Posted Nov 8, 2015 2:56 UTC (Sun) by PaXTeam (guest, #24616) [Link]



what? since when does asking a question define anything? isn't that how we find out what someone else thinks? isn't that what *they* have that webform (never mind the mailing lists) for as well? in other words, you admit that my question was not actually answered.

> The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument.

you didn't have an argument to begin with, that's what i explained in the part you carefully chose not to quote. i'm not here to defend myself against your clearly idiotic attempts at proving whatever you're trying to prove; as they say even in kernel circles, code speaks, bullshit walks. you can look at mine and determine what i can or cannot do (not that you have the knowledge to understand most of it, mind you). that said, there're clearly other, more capable people who've done so and decided that my/our work was worth something, else nobody would have been feeding off of it for the past 15 years and still counting. and as incredible as it may seem to you, life doesn't revolve around the vanilla kernel; not everybody's dying to get their code in there, especially when it means putting up with the kind of stupid hostility on lkml that you now also demonstrated here (it's ironic how you came to the defense of josh, who specifically asked people not to bring that notorious lkml style here. nice job there, James.). as for world domination, there're many ways to achieve it, and something tells me that you're clearly out of your league here since PaX has already achieved that. you're running code that implements PaX features as we speak.



Posted Nov 8, 2015 16:52 UTC (Sun) by jejb (subscriber, #6654) [Link]



I posted the one-line git script giving your authored patches in response to this original request by you (this one, just in case you've forgotten: http://lwn.net/Articles/663591/): > as for getting code upstream, how about you check the kernel git logs (minus the stuff that wasn't properly credited)? I take it, by the way you've shifted ground in the previous threads, that you wish to withdraw that request?



Posted Nov 8, 2015 19:31 UTC (Sun) by PaXTeam (guest, #24616) [Link]



Posted Nov 8, 2015 22:31 UTC (Sun) by pizza (subscriber, #46) [Link]



Please present one that isn't wrong, or is less wrong. It will take much less time than you've already wasted here.



Posted Nov 8, 2015 22:49 UTC (Sun) by PaXTeam (guest, #24616) [Link]



anyway, since it's you guys who have a bee in your bonnet, let's test your level of intelligence too. first figure out my email address and project name, then try to find the commits that say they come from there (it brought back some memories from 2004 already, how time flies! i'm surprised i actually managed to accomplish this much while explicitly not trying, imagine if i did :). it's an extremely complex task, so by accomplishing it you'll prove yourself to be the top dog here on lwn, whatever that's worth ;).



Posted Nov 8, 2015 23:25 UTC (Sun) by pizza (subscriber, #46) [Link]



*shrug* Or don't; you're only sullying your own reputation.



Posted Nov 9, 2015 7:08 UTC (Mon) by jospoortvliet (guest, #33164) [Link]



Posted Nov 9, 2015 11:38 UTC (Mon) by hkario (subscriber, #94864) [Link]



I wouldn't either



Posted Nov 12, 2015 2:09 UTC (Thu) by jschrod (subscriber, #1646) [Link]



Posted Nov 12, 2015 8:50 UTC (Thu) by nwmcsween (guest, #62367) [Link]



Posted Nov 8, 2015 3:38 UTC (Sun) by PaXTeam (guest, #24616) [Link]



Posted Nov 12, 2015 13:47 UTC (Thu) by nix (subscriber, #2304) [Link]



Ah. I thought my memory wasn't failing me. Compare to PaXTeam's response to <http://lwn.net/Articles/663612/>. PaXTeam is not averse to outright lying if it means he gets to appear right, I see. Perhaps PaXTeam's memory is failing, and this apparent contradiction is not a brazen lie, but given that the two posts were made within a day of each other, I doubt it. (PaXTeam's total unwillingness to assume good faith in others deserves some reflection. Yes, I *do* think he's lying by implication here, and doing so when there's almost nothing at stake. God alone knows what he's willing to stoop to when something *is* at stake. Gosh, I wonder why his fixes aren't going upstream very fast.)



Posted Nov 12, 2015 14:11 UTC (Thu) by PaXTeam (guest, #24616) [Link]



> and that one commit you found that went in despite said ban

someone's ban doesn't mean it will translate into someone else's enforcement of that ban, as is clear from the commit in question. it's somewhat sad that it takes a security fix to expose the fallacy of this policy, though. the rest of your pithy ad hominem speaks for itself better than i ever could ;).



Posted Nov 12, 2015 15:58 UTC (Thu) by andreashappe (subscriber, #4810) [Link]



Posted Nov 7, 2015 19:01 UTC (Sat) by cwillu (guest, #67268) [Link]



I don't see this message in my mailbox, so presumably it got swallowed.



Posted Nov 7, 2015 22:33 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]



You're aware that it's entirely possible that everyone is wrong here, right? That the kernel maintainers need to focus more on security, that the article was biased, that you're being irresponsible to decry the state of security while doing nothing to help, and that your patchsets wouldn't help that much and are the wrong direction for the kernel? That just because the kernel maintainers aren't 100% right, it doesn't mean you are?



Posted Nov 9, 2015 9:50 UTC (Mon) by njd27 (guest, #5770) [Link]



I think you've got him backwards there. Jon is comparing this to Mindcraft because he thinks that despite being unpalatable to a lot of the community, the article might in fact contain a lot of truth.



Posted Nov 9, 2015 14:03 UTC (Mon) by corbet (editor, #1) [Link]



Posted Nov 9, 2015 15:13 UTC (Mon) by spender (guest, #23067) [Link]



"There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch. All of this could well be true." Just as you criticized the article for mentioning Ashley Madison even though the very first sentence of the following paragraph notes that it didn't involve the Linux kernel, you can't give credence to conspiracy theories without incurring the same criticism (in other words, you can't play the Glenn Beck "I'm just asking the questions here!" whose "questions" fuel the conspiracy theories of others). Much like mentioning Ashley Madison as an example for non-technical readers of the prevalence of Linux in the world: if you're criticizing the mention, then shouldn't likening a non-FUD article to a FUD article also deserve criticism, especially given the rosy, self-congratulatory picture you painted of upstream Linux security? As the PaX Team pointed out in the initial post, the motivations aren't hard to understand -- you made no mention at all of it being the fifth in a long-running series following a pretty predictable time trajectory. No, we didn't miss the overall analogy you were trying to make; we just don't think you can have your cake and eat it too. -Brad



Posted Nov 9, 2015 15:18 UTC (Mon) by karath (subscriber, #19025) [Link]



Posted Nov 9, 2015 17:06 UTC (Mon) by k3ninho (subscriber, #50375) [Link]



It is gracious of you not to blame your readers. I figure they're a fair target: there's that line about those ignorant of history being condemned to re-implement Unix -- as your readers are! :-) K3n.



Posted Nov 9, 2015 18:43 UTC (Mon) by bojan (subscriber, #14302) [Link]



Sadly, I understand neither the "security" folks (PaXTeam/spender) nor the mainstream kernel folks, in terms of their attitude. I confess I have absolutely no technical capabilities on any of these topics, but if they had all decided to work together, instead of having endless and pointless flame wars and blame-game exchanges, a lot of the stuff would have been done already. And all the while, everyone involved could have made another big pile of money on the stuff. They all seem to want a better Linux kernel, so I've got no idea what the problem is. It seems that nobody is willing to yield any of their positions even a little bit. Instead, both sides seem bent on trying to insult their way into forcing the other side to give up. Which, of course, never works - it just causes more pushback. Perplexing stuff...



Posted Nov 9, 2015 19:00 UTC (Mon) by sfeam (subscriber, #2841) [Link]



Posted Nov 9, 2015 19:44 UTC (Mon) by bojan (subscriber, #14302) [Link]



Take a scientific computational cluster with an "air gap", for instance. You'd probably want most of the security stuff turned off on it to gain maximum performance, because you can trust all the users. Now take a few billion mobile phones that may be difficult or slow to patch. You'd probably want to kill most of the exploit classes there, if those devices can still run reasonably well with most security features turned on. So it isn't either/or. It's probably "it depends". But if the stuff isn't there for everyone to compile/use in the vanilla kernel, it will be harder to make it part of everyday decisions for distributors and users.



Posted Nov 6, 2015 22:20 UTC (Fri) by artem (subscriber, #51262) [Link]



How sad. This Dijkstra quote comes to mind immediately: Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot."



Posted Nov 7, 2015 0:35 UTC (Sat) by roc (subscriber, #30627) [Link]



I guess that truth was too unpleasant to suit into Dijkstra's world view.



Posted Nov 7, 2015 10:52 UTC (Sat) by ms (subscriber, #41272) [Link]



Indeed. And the interesting thing to me is that once I reach that point, tests are not sufficient - model checking at a minimum, and really proofs are the only way forwards. I'm no security expert; my field is all distributed systems. I understand and have implemented Paxos, and I believe I can explain how and why it works to anyone. But I'm currently doing some algorithms combining Paxos with a bunch of variations on VectorClocks and reasoning about causality and consensus. No test is sufficient because there are infinite interleavings of events, and my head just couldn't cope with working on this either at the computer or on paper - I found I couldn't intuitively reason about this stuff at all. So I started defining the properties needed and step-by-step proving why each of them holds. Without my notes and proofs I can't even explain to myself, let alone anyone else, why this thing works. I find it both completely obvious that this would happen and completely terrifying - the maintenance cost of these algorithms is now an order of magnitude higher.
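[As an aside, the vector-clock causality ordering referred to above can be sketched in a few lines. This is a generic textbook illustration, not code from the commenter; all function names are invented:]

```python
# Minimal vector-clock sketch: each process keeps a counter per process id.
# merge() takes the elementwise max on receipt; happens_before() is the
# usual partial order on clocks.

def tick(clock, pid):
    """Local event at process pid: increment its own counter."""
    c = dict(clock)
    c[pid] = c.get(pid, 0) + 1
    return c

def merge(local, received, pid):
    """On message receipt: elementwise max of the two clocks, then tick."""
    c = {k: max(local.get(k, 0), received.get(k, 0))
         for k in set(local) | set(received)}
    return tick(c, pid)

def happens_before(a, b):
    """True iff a causally precedes b: a <= b elementwise, and a != b."""
    keys = set(a) | set(b)
    le = all(a.get(k, 0) <= b.get(k, 0) for k in keys)
    return le and any(a.get(k, 0) < b.get(k, 0) for k in keys)

# Two processes, p and q, with one message from p to q:
p1 = tick({}, "p")       # event at p: {"p": 1}
q1 = tick({}, "q")       # event at q: {"q": 1}
q2 = merge(q1, p1, "q")  # q receives p's message: {"p": 1, "q": 2}

assert happens_before(p1, q2)      # the send precedes the receive
assert not happens_before(p1, q1)  # p1 and q1 are concurrent
assert not happens_before(q1, p1)
```

[When neither happens_before(a, b) nor happens_before(b, a) holds, the events are concurrent, and the number of valid interleavings of such events is what explodes beyond what testing or intuition can cover, which is the point being made above.]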



Posted Nov 19, 2015 12:24 UTC (Thu) by Wol (subscriber, #4433) [Link]



> Indeed. And the interesting thing to me is that once I reach that point, tests are not sufficient - model checking at a minimum and really proofs are the only way forward.

Or are you just using the wrong maths? Hobbyhorse time again :-) but to quote a fellow Pick developer ... "I often walk into a SQL development shop and see that wall - you know, the one with the huge SQL schema that no-one fully understands on it - and wonder how I can easily hold the entire schema for a Pick database of the same or greater complexity in my head". But it's simple - by training I'm a Chemist, by interest a Physical Chemist (and by profession an unemployed programmer :-). And when I'm thinking about chemistry, I can ask myself "what is an atom made of" and think about things like the strong nuclear force. Next level up, how do atoms stick together and make molecules, and think about the electroweak force and electron orbitals, and how do chemical reactions happen. Then I think about how molecules stick together to make materials, and think about metals, and/or Van der Waals, and stuff. Point is, you need to *layer* stuff, and look at things, and say "how can I split parts off into 'black boxes' so at any one level I can assume the other levels 'just work'". For example, with Pick a FILE (table to you) stores a class - a collection of similar objects. One object per Record (row). And, same as relational, one attribute per Field (column). Can you map your relational tables to reality so easily? :-) Going back THIRTY years, I remember a story about a guy who built little computer crabs that could quite happily scuttle around in the surf zone. Because he didn't try to work out how to solve all the problems at once - each of his (incredibly puny by today's standards - this is the 8080/Z80 era!) processors was set to process just a little bit of the problem, and there was no central "brain". But it worked ... Maybe you should just write a bunch of small modules to solve each individual problem, and let the final answer "just happen". Cheers, Wol



Posted Nov 19, 2015 19:28 UTC (Thu) by ksandstr (guest, #60862) [Link]



To my understanding, that is exactly what a mathematical abstraction does. For example, in Z notation we'd construct schemas for the various modifying ("delta") operations on the base schema, and then argue about preservation of formal invariants, properties of the outcome, and transitivity of the operation when chained with itself, or with the preceding aggregate schema composed of schemas A through O (for which these things have already been argued). The outcome is a set of operations that, executed in arbitrary order, lead to a set of properties holding for the result and outputs - thus proving the formal design correct (with caveats regarding scope, correspondence with the implementation [though that can be proven as well], and read-only ["xi"] operations).
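For small finite state spaces, the invariant-preservation argument above can even be checked exhaustively by machine. A minimal sketch (my own toy example, not Z notation; the conserved-total invariant and the two transfer operations are made up for illustration) that brute-forces every delta operation against every state:

```python
# Toy "model checking": exhaustively verify that each delta operation
# preserves an invariant over a small finite state space.
from itertools import product

# State: (balance_a, balance_b) with a conserved total -- the invariant.
TOTAL = 4
STATES = [(a, b) for a, b in product(range(TOTAL + 1), repeat=2) if a + b == TOTAL]

def invariant(state):
    a, b = state
    return a >= 0 and b >= 0 and a + b == TOTAL

def transfer_ab(state):
    """Delta operation: move one unit from a to b, when possible."""
    a, b = state
    return (a - 1, b + 1) if a > 0 else state

def transfer_ba(state):
    """Delta operation: move one unit from b to a, when possible."""
    a, b = state
    return (a + 1, b - 1) if b > 0 else state

def check(ops):
    """Every op applied in every invariant-satisfying state must preserve it."""
    for state in STATES:
        for op in ops:
            assert invariant(op(state)), (op.__name__, state)
    return True

if __name__ == "__main__":
    print(check([transfer_ab, transfer_ba]))  # True
```

One-step preservation over all states gives, by induction, the "executed in arbitrary order" property for any chain of these operations - which is the shape of the schema-composition argument, just done by enumeration instead of proof.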



Posted Nov 20, 2015 11:23 UTC (Fri) by Wol (subscriber, #4433) [Link]



Looking through the history of computing (and probably plenty of other fields too), you'll probably find that people "can't see the wood for the trees" more often than not. They dive into the detail and completely miss the big picture. (Medicine, an interest of mine, suffers from that too - I remember somebody talking about the consultant wanting to amputate a gangrenous leg to save someone's life - oblivious to the fact that the patient was dying of cancer.) Cheers, Wol



Posted Nov 7, 2015 6:35 UTC (Sat) by dgc (subscriber, #6611) [Link]



https://www.youtube.com/watch?v=VpuVDfSXs-g (LCA 2015 - "Programming Considered Harmful") FWIW, I think this talk is very relevant to why writing secure software is so hard. -Dave.



Posted Nov 7, 2015 5:49 UTC (Sat) by kunitz (subscriber, #3965) [Link]



While we are spending millions on a mess of security problems, kernel issues are not on our high-priority list. Honestly, I remember only once having discussed a kernel vulnerability. The result of the analysis was that all our systems were running kernels that were older than the kernel that had the vulnerability. But "patch management" is a real issue for us. Software must continue to work when we install security patches or update to new releases because of a vendor's end-of-life policy. The revenue of the company depends on the IT systems running. So "not breaking user space" is a security feature for us, because a breakage of one component of our several tens of thousands of Linux systems will stop the roll-out of the security update. Another problem is embedded software or firmware. These days almost all hardware systems include an operating system, often some Linux version, providing a full embedded network stack to support remote management. Often these systems don't survive our mandatory security scan, because vendors still haven't updated the embedded openssl. The real challenge is to provide a software stack that can be operated in the hostile environment of the Internet, maintaining full system integrity, for ten years or even longer without any customer maintenance. The current state of software engineering will require support for an automated update process, but vendors must understand that their business model has to be able to finance the resources providing the updates. Overall I'm optimistic; networked software is not the first technology used by mankind to cause problems that were addressed later. Steam engine use did lead to boiler explosions, but the "engineers" were able to reduce this risk significantly over a few decades.



Posted Nov 7, 2015 10:29 UTC (Sat) by ms (subscriber, #41272) [Link]



The following is all guesswork; I'd be keen to know if others have evidence one way or another on this: The people who learn how to hack into these systems through kernel vulnerabilities know that the skills they've learnt have a market. Thus they don't tend to hack in order to wreak havoc - indeed, in general where data has been stolen in order to be released to embarrass people, it _seems_ as though those hacks were through much simpler vectors. I.e. less skilled hackers find there's a whole load of low-hanging fruit which they can get at. They're not being paid ahead of time for the data, so they turn to extortion instead. They don't cover their tracks, and they can often be found and charged with criminal offences. So if your security meets a certain basic level of proficiency and/or your company isn't doing anything that puts it near the top of "companies we'd like to embarrass" (I suspect the latter is much more effective at keeping systems "safe" than the former), then the hackers that get into your system are likely to be skilled, paid, and probably not going to do much damage - they're stealing data for a competitor or state. So that doesn't hurt your bottom line - at least not in a way which your shareholders will be aware of. So why fund security?



Posted Nov 7, 2015 17:02 UTC (Sat) by citypw (guest, #82661) [Link]



On the other hand, some effective mitigation at the kernel level would be very helpful in crushing cybercriminals' and skiddies' attempts. If one of your customers running a futures trading platform exposes an open API to their clients, and the server has some memory corruption bugs that can be exploited remotely, then you know there are known attack techniques (such as offset2lib) that can help the attacker build the weaponized exploit much more easily. Will you explain the failosophy "a bug is a bug" to your customer and tell them it will be OK? Btw, offset2lib is useless against PaX/grsecurity's ASLR implementation. For most commercial uses, extra security mitigation in the software won't cost you extra funds; you still have to do the regression testing for every upgrade anyway.
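As background on why the ASLR point matters: offset2lib-style attacks exploit the fact that, on affected kernels, the PIE executable and the shared libraries were placed at one shared random offset, so leaking a single pointer revealed the rest of the layout. A small way to observe library placement from userspace (my own Linux-oriented sketch, not code from the offset2lib work):

```python
# Print the load address of a libc symbol in this process. Run the script
# several times and compare: with ASLR enabled the address should differ
# between runs. If all regions share one random offset, leaking any single
# pointer reveals the others -- the observation behind offset2lib.
import ctypes

def libc_symbol_address():
    libc = ctypes.CDLL(None)  # handle to the C library already loaded here
    return ctypes.cast(libc.printf, ctypes.c_void_p).value

if __name__ == "__main__":
    print(hex(libc_symbol_address()))
```

Comparing the printed address across runs shows whether the library base moves; by itself it says nothing about the relative offsets between regions, which is the specific weakness offset2lib targeted and which PaX's ASLR design avoids.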



Posted Nov 12, 2015 16:14 UTC (Thu) by andreashappe (subscriber, #4810) [Link]



Keep in mind that I specialise in external web-based penetration tests and that in-house tests (local LAN) will likely yield completely different results.



Posted Nov 7, 2015 20:33 UTC (Sat) by mattdm (subscriber, #18) [Link]



I keep reading this headline as "a new Minecraft moment", and thinking that perhaps they've decided to follow up the .Net thing by open-sourcing Minecraft. Oh well. I mean, security is good too, I suppose.



Posted Nov 7, 2015 22:24 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]



Posted Nov 12, 2015 17:29 UTC (Thu) by smitty_one_every (subscriber, #28989) [Link]



Posted Nov 8, 2015 10:34 UTC (Sun) by jcm (subscriber, #18262) [Link]



Posted Nov 9, 2015 7:15 UTC (Mon) by jospoortvliet (guest, #33164) [Link]



Posted Nov 9, 2015 15:53 UTC (Mon) by neiljerram (subscriber, #12005) [Link]



(Oh, and I was also still wondering how Minecraft had taught us about Linux performance - so thanks to the other comment thread that pointed out the 'd', not 'e'.)



Posted Nov 9, 2015 11:31 UTC (Mon) by ortalo (guest, #4654) [Link]



I'd just like to add that in my opinion, there is a general problem with the economics of computer security, which is especially visible at the moment. Two problems, even, maybe. First, the money spent on computer security is often diverted towards the so-called security "circus": quick, easy solutions that are mainly selected just in order to "do something" and get better press. It took me a long time - maybe decades - to claim that no security mechanism at all is better than a bad mechanism. But now I firmly believe in this perspective and would rather take the risk knowingly (provided that I can save money/resources for myself) than take a bad approach at solving it (and have no money/resources left when I realize I should have done something else). And I find there are many bad or incomplete approaches currently available in the computer security field. Those spilling our rare money/resources on ready-made useless tools should get the bad press they deserve. And we definitely need to enlighten the press on that, because it's not so easy to understand the efficiency of protection mechanisms (which, by definition, should prevent things from happening). Second, and this may be more recent and more worrying: the flow of money/resources is oriented towards attack tools and vulnerability discovery much more than towards new protection mechanisms. This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad, useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them into uselessness.
Meanwhile, all the resources go to those grown-up teenagers playing the white-hat hackers with not-so-difficult programming tricks or network monitoring or WWI-level cryptanalysis. And now also to the cyberwarriors and cyberspies that have yet to prove their usefulness entirely (especially for peace protection...). Personally, I would happily leave them all the hype; but I will forcefully claim that they have no right at all to any of the budget allocation decisions. Only those working on protection should. And yes, it means we should decide where to put the resources. We have to claim the exclusive lock for ourselves this time. (And I suppose the PaX team could be among the first to benefit from such a change.) While thinking about it, I wouldn't even leave the white-hat or cyber guys any hype in the end. That's more publicity than they deserve. I crave for the day I will read in the newspaper: "Another of these ill-advised debutant programmer hooligans that pretend to be cyber-pirates/warriors modified some well-known virus program code exploiting a programmer mistake and managed nevertheless to bring one of those unfinished and bad-quality programs, X, that we are all obliged to use, to its knees, annoying millions of regular users with his unfortunate cyber-vandalism. All the protection experts unanimously recommend that, once again, the budget of the cyber-command be retargeted, or at least leveled off, in order to create more security engineer positions in the academic field or civilian industry. And that X's producer, XY Inc., be held liable for the potential losses if proved to have been unprofessional in this affair."



Hmmm - cyber-hooligans - I like the label. Though it doesn't apply well to the battlefield-oriented variant.



Posted Nov 9, 2015 14:28 UTC (Mon) by drag (guest, #31333) [Link]



The state of the 'software security industry' is an f-ing disaster. Failure of the highest order. There are huge amounts of money going into 'cyber security', but it is mostly spent on government compliance and audit efforts. This means that instead of actually putting effort into correcting problems and mitigating future ones, the vast majority of the effort goes into taking existing applications and making them conform to committee-driven guidelines with the minimum amount of effort and changes. Some level of regulation and standardization is absolutely needed, but lay people are clueless and completely unable to discern the difference between somebody with valuable expertise and some company that has spent tens of millions on slick marketing and 'native advertising' on large websites and computer magazines. The people with the money sadly only have their own judgment to rely on when buying into 'cyber security'.

> Those spilling our rare money/resources on ready-made useless tools should get the bad press they deserve.

There is no such thing as 'our rare money/resources'. You have your money, I have mine. Money being spent by some company like Red Hat is their money. Money being spent by governments is the government's money. (You, really, have far more control over how Walmart spends its money than over what your government does with theirs.)

> This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad, useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them into uselessness.

Having secure software with strong encryption mechanisms in the hands of the public runs counter to the interests of most major governments. Governments, like any other for-profit organization, are primarily interested in self-preservation. Money spent on drone projects or banking auditing/oversight regulation compliance is FAR more valuable to them than trying to help the public have a secure mechanism for making phone calls - especially when those secure mechanisms interfere with data collection efforts. Sadly you/I/we can't depend on some magical benefactor with deep pockets to sweep in and make Linux better. It's just not going to happen. Companies like Red Hat have been massively beneficial in spending resources to make the Linux kernel more capable... however, they are driven by the need to turn a profit, which means they must cater directly to the kind of requirements established by their customer base. Customers for EL tend to be far more focused on reducing costs associated with management and software development than on security at the low-level OS. Enterprise Linux customers tend to rely on physical, human-policy, and network security to protect their 'soft' interiors from being exposed to external threats, assuming (rightly) that there is very little they can do to really harden their systems. In fact, when the choice comes down to security vs convenience, I'm sure most customers will happily defeat or strip out any security mechanisms introduced into Linux. On top of that, most Enterprise software is extremely bad. So much so that 10 hours spent on improving a web front-end will yield more real-world security benefit than 1000 hours spent on Linux kernel bugs for most companies. Even for 'normal' Linux users, a security bug in their Firefox NPAPI Flash plugin is far more devastating, and poses a massively greater risk, than an obscure Linux kernel buffer overflow problem. It's just not really necessary for attackers to get 'root' to get access to the important data... often all of which is contained in a single user account. Ultimately it's up to people like you and me to put the effort and money into improving Linux security, for both ourselves and other people.



Posted Nov 10, 2015 11:05 UTC (Tue) by ortalo (guest, #4654) [Link]



Spilling has always been the case, but now, to me and in computer security, most of the money seems spilled out of bad faith. And this is mostly your money or mine: either tax-funded governmental resources or corporate costs that are directly re-imputed into the prices of the goods/software we are told we are *obliged* to buy. (Look at the marketing discourse for corporate firewalls, home alarms or antivirus software.) I think it's time to point out that there are several "malicious malefactors" around, and that there is a real need to identify and sanction them and confiscate the resources they have somehow managed to monopolize. And I do *not* think Linus is among such culprits, by the way. But I think he may be among those hiding their heads in the sand about the aforementioned evil actors, while he probably has more leverage to counteract them, or oblige them to reveal themselves, than many of us. I find that to be of brown-paper-bag level (though head-in-the-sand is somehow a new interpretation). In the end, I think you are right to say that currently it is only up to us individuals to try really to do something to improve Linux or computer security. But I still think that I am right to say that this is not normal; especially while some very serious people get very serious salaries to randomly distribute some hard-to-evaluate budgets. [1] A paradoxical situation when you think about it: in a domain where you are first and foremost preoccupied by malicious people, everybody should have factual, transparent and honest behavior as the first priority in their mind.



Posted Nov 9, 2015 15:47 UTC (Mon) by MarcB (subscriber, #101804) [Link]



It even has a nice, seven-line BASIC pseudo-code that describes the current situation and clearly shows that we are stuck in an endless loop. It does not answer the big question, though: how to write better software. The sad thing is that this is from 2005, and all the things that were clearly stupid ideas 10 years ago have proliferated even more.



Posted Nov 10, 2015 11:20 UTC (Tue) by ortalo (guest, #4654) [Link]



Note: IMHO, we should investigate further why these dumb things proliferate and get so much support. If it's only human psychology, well, let's fight it: e.g. Mozilla has shown us that you can do wonderful things given the right message. If we are dealing with active people exploiting public credulity: let's identify and fight them. But, more importantly, let's capitalize on this knowledge and secure *our* systems, to demonstrate at a minimum (and more later on, of course). Your reference's conclusion is very nice to me. "Challenge [...] the conventional wisdom and the status quo": that job I would happily accept.



Posted Nov 30, 2015 9:39 UTC (Mon) by paulj (subscriber, #341) [Link]



That rant is itself a bunch of "empty calories". The converse of the items it rants about, which it is suggesting at some level, would be as bad or worse, and indicative of the worst kind of security thinking that has put a lot of people off. Alternatively, it's just a rant that offers little of value. Personally, I think there is no magic bullet. Security is, and always has been throughout human history, an arms race between defenders and attackers, and one that is inherently a trade-off between usability, risks and costs. If there are mistakes being made, it is that we should probably spend more resources on defences that would block entire classes of attacks. E.g., why is the grsecurity kernel hardening stuff so hard to apply to regular distros (e.g. there is no reliable source of a grsecurity kernel for Fedora or RHEL, is there?)? Why does the entire Linux kernel run in one security context? Why are we still writing lots of software in C/C++, often without any basic safety-checking abstractions (e.g. basic bounds-checking layers between I/O and parsing layers, say)? Can hardware do more to provide security with speed? No doubt there are lots of people working on "block classes of attacks" stuff; the question is, why aren't more resources directed there?



Posted Nov 10, 2015 2:06 UTC (Tue) by timrichardson (subscriber, #72836) [Link]



>There are many reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies.

This seems like a reason which is really worth exploring. Why is it so? I think it's not obvious why this doesn't get more attention. Is it possible that the people with the money are right not to prioritise this more highly? After all, what interest do they have in an insecure, exploitable kernel? Where there is common cause, Linux development gets resourced. It's been this way for many years. If filesystems qualify for common interest, surely security does. So there doesn't seem to be any obvious reason why this topic doesn't get more mainstream attention, except that it really already gets enough. You could say that disaster has not struck yet, that the iceberg has not been hit. But it looks like the Linux development process is not overly reactive elsewhere.



Posted Nov 10, 2015 15:53 UTC (Tue) by raven667 (subscriber, #5198) [Link]



That's an interesting question; certainly that's what they really believe, no matter what they publicly say about their commitment to security technologies. What is the actually demonstrated downside for kernel developers and the organizations that pay them? As far as I can tell, there isn't sufficient consequence for the lack of security to drive more investment, so we are left begging and cajoling unconvincingly.



Posted Nov 12, 2015 14:37 UTC (Thu) by ortalo (guest, #4654) [Link]



The key problem with this domain is that it pertains to malicious faults. So, by the time consequences manifest themselves, it is too late to act. And if the current commitment to the absence of a voluntary strategy persists, we are going to oscillate between phases of relaxed unconsciousness and anxious paranoia. Admittedly, kernel developers seem pretty resistant to paranoia. That is a good thing. But I'm waiting for the day when armed land drones patrol US streets in the vicinity of their kids' schools for them to find the feeling. Not so distant are the days when innocent lives will unconsciously rely on the security of (Linux-based) computer systems; under water, that's already the case if I remember my last dive correctly, as well as in several recent cars according to some reports.



Posted Nov 12, 2015 14:32 UTC (Thu) by MarcB (subscriber, #101804) [Link]



Classic hosting companies that use Linux as an exposed front-end system are retreating from development, while HPC, mobile and "generic enterprise", i.e. RHEL/SLES, are pushing the kernel in their directions. This is really not that surprising: for hosting needs, the kernel has been "done" for quite a while now. Apart from support for current hardware, there is not much use for newer kernels. Linux 3.2, or even older, works just fine. Hosting does not need scalability to hundreds or thousands of CPU cores (one uses commodity hardware), advanced instrumentation like perf or tracing (systems are locked down as much as possible) or advanced power management (if the system doesn't have constant high load, it isn't making enough money). So why should hosting companies still make strong investments in kernel development? Even if they had something to contribute, the hurdles for contribution have become higher and higher. For their security needs, hosting companies already use grsecurity. I have no numbers, but some experience suggests that grsecurity is basically a fixed requirement for shared hosting. On the other hand, kernel security is almost irrelevant on the nodes of a supercomputer or on a system running large enterprise databases that are wrapped in layers of middleware. And mobile vendors simply do not care.



Posted Nov 10, 2015 4:18 UTC (Tue) by bronson (subscriber, #4806) [Link]



Linking



Posted Nov 10, 2015 13:15 UTC (Tue) by corbet (editor, #1) [Link]



Posted Nov 11, 2015 22:38 UTC (Wed) by rickmoen (subscriber, #6943) [Link]



The assembled likely recall that in August 2011, kernel.org was root-compromised. I am sure the system's hard drives were sent off for forensic examination, and we have all been waiting patiently for the answer to the most important question: What was the compromise vector? From shortly after the compromise was discovered on August 28, 2011, right through April 1st, 2013, kernel.org included this note at the top of the site News: 'Thanks to all for your patience and understanding during our outage and please bear with us as we bring up the different kernel.org systems over the next few weeks. We will be writing up a report on the incident in the future.' (Emphasis added.) That comment was removed (along with the rest of the site News) in a May 2013 edit, and there hasn't been - to my knowledge - a peep about any report on the incident since then. This has been disappointing. When the Debian Project discovered unexpected compromise of several of its servers in 2007, Wichert Akkerman wrote and posted an excellent public report on exactly what happened. Likewise, the Apache Foundation did the right thing with good public post-mortems of the 2010 Web site breaches. Ars Technica's Dan Goodin was still trying to follow up on the lack of a post-mortem on the kernel.org meltdown - in 2013. Two years ago. He wrote: Linux developer and maintainer Greg Kroah-Hartman told Ars that the investigation has yet to be completed and gave no timetable for when a report might be released. [...] Kroah-Hartman also told Ars kernel.org systems were rebuilt from scratch following the attack. Officials have developed new tools and procedures since then, but he declined to say what they are. "There will be a report later this year about site [sic] has been engineered, but don't quote me on when it will be released as I am not responsible for it," he wrote.
Who's responsible, then? Is anyone? Anybody? Bueller? Or is it a state secret, or what? Two years since Greg K-H said there would be a report 'later this year', and four years since the meltdown: nothing yet. How about some facts? Rick Moen [email protected]



Posted Nov 12, 2015 14:19 UTC (Thu) by ortalo (guest, #4654) [Link]



Less seriously, note that if even the Linux mafia doesn't know, it must have been the Venusians; they are notoriously stealthy in their invasions.



Posted Nov 14, 2015 12:46 UTC (Sat) by error27 (subscriber, #8346) [Link]



I know the kernel.org admins have given talks about some of the new protections that have been put into place. There are no more shell logins; instead everything uses gitolite. The different services are on different hosts. There are more kernel.org staff now. People are using two-factor authentication. And other stuff. Do a search for Konstantin Ryabitsev.



Posted Nov 14, 2015 15:58 UTC (Sat) by rickmoen (subscriber, #6943) [Link]



I beg your pardon if I was somehow unclear: That was said to have been the path of entry to the machine (and I can readily believe that, as it was also the exact path of entry into shells.sourceforge.net, many years prior, around 2002, and into many other shared Internet hosts for many years). But that's not what's of main interest, and isn't what the long-promised forensic study would primarily concern: How did the intruders escalate to root? To quote the kernel.org administrator in the August 2011 Dan Goodin article you cited: 'How they managed to exploit that to root access is currently unknown and is being investigated'. OK, folks, you've now had four years of investigation. What was the path of escalation to root? (Also, other details that would logically be covered by a forensic study, such as: Whose key was stolen? Who stole the key?) That is the sort of post-mortem that was promised prominently on the front page of kernel.org, to reporters, and elsewhere for a long time (and then summarily removed as a promise from the front page of kernel.org, without comment, along with the rest of the site News section, and apparently dropped). It still would be appropriate to know and share that information. Especially the datum of whether or not the path to root privilege was a kernel bug (and, if not, what it was). Rick Moen [email protected]



Posted Nov 22, 2015 12:42 UTC (Sun) by rickmoen (subscriber, #6943) [Link]



I've done a closer review of the revelations that came out soon after the break-in, and think I've found the answer, via a leaked copy of kernel.org chief sysadmin John H. 'Warthog9' Hawley's Aug. 29, 2011 e-mail to shell users (two days before the public was informed), plus Aug. 31st comments to The Register's Dan Goodin by 'two security researchers who were briefed on the breach': Root escalation was via exploit of a Linux kernel security hole: per the two security researchers, it was one both extremely embarrassing (wide-open access to /dev/mem contents, including the running kernel's image in RAM, in 2.6 kernels of that day) and known-exploitable for the prior six years by canned 'sploits, one of which (Phalanx) was run by some script kiddie after entry using stolen dev credentials. Other tidbits: - Site admins left the root-compromised Web servers running with all services still lit up, for multiple days. - Site admins and Linux Foundation sat on the information and failed to inform the public for those same several days. - Site admins and Linux Foundation have never revealed whether trojaned Linux source tarballs were posted in the http/ftp tree for the 19+ days before they took the site down. (Yes, git checkout was fine, but what about the thousands of tarball downloads?) - After promising a report for several years and then quietly removing that promise from the front page of kernel.org, Linux Foundation now stonewalls press queries. I posted my best attempt at reconstructing the story, absent a real report from insiders, to SVLUG's main mailing list yesterday. (Necessarily, there are surmises. If the people with the facts were more forthcoming, we would know what happened for certain.) I do have to wonder: if there's another embarrassing screwup, will we even be told about it at all? Rick Moen [email protected]



Posted Nov 22, 2015 14:25 UTC (Sun) by spender (guest, #23067) [Link]



Also, it's preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on.

-Brad



How about the long overdue autopsy on the August 2011 kernel.org compromise?



Posted Nov 22, 2015 16:28 UTC (Sun) by rickmoen (subscriber, #6943) [Link]



Thanks for your comments, Brad. I'd been relying on Dan Goodin's claim of Phalanx being what was used to gain root, in the bit where he cited 'two security researchers who were briefed on the breach' to that effect. Goodin also elaborated: 'Fellow security researcher Dan Rosenberg said he was also briefed that the attackers used Phalanx to compromise the kernel.org machines.' This was the first time I'd heard of a rootkit being claimed to be bundled with an attack tool, and I noted that oddity in my posting to SVLUG. That having been said, yeah, the Phalanx README doesn't specifically claim this, so then maybe Goodin and his several 'security researcher' sources blew that detail, and nobody but kernel.org insiders yet knows the escalation path used to gain root.

Also, it's preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on.

Arguable, but a tradeoff; you can poke the compromised live system for state information, but with the disadvantage of leaving your system running under hostile control. I was always taught that, on balance, it's better to pull power to end the intrusion.

Rick Moen
[email protected]



Posted Nov 20, 2015 8:23 UTC (Fri) by toyotabedzrock (guest, #88005) [Link]



Posted Nov 20, 2015 9:31 UTC (Fri) by gioele (subscriber, #61675) [Link]



With "something" you mean those who produce those closed source drivers, right? If the "consumer product companies" just stuck to using parts with mainlined open source drivers, then updating their products would be much easier.



A new Mindcraft moment?



Posted Nov 20, 2015 11:29 UTC (Fri) by Wol (subscriber, #4433) [Link]



They've got ring 0 privilege, can access protected memory directly, and can't be audited. Trick a kernel into running a compromised module and it's game over. Even tickle a bug in a "good" module, and it's probably game over - in this case quite literally, as such modules tend to be video drivers optimised for games ...