Ok, for years I've been planning on rewriting Satori in Python (or something else) and never have gotten around to it. Well, 2 weeks ago I started playing with pyshark while working with SMB packets for the FOR572 class. More on that project in the future, but it got me thinking: why go through all the headache of writing new code to parse all those packets when I can use the power of tshark, via pyshark?
So with that said, I really do plan on Satori 2.0 (or would it be 1.0, since I never made it out of the 0.7x arena?). Future releases of Satori will be pyshark/Python based, with tshark on the backend to do the heavy lifting. I plan on coding just enough to pull the needed info and query the underlying .xml files for fingerprint data. This will get it off the ground again. It may not be as fast as it could be, but trade-offs: it's that or I probably never get back to it :)
I'm not sure I can do everything I was doing with Satori before, but I can easily do DHCP, HTTP agent strings, and some of the SMB stuff I was doing.
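To give a feel for why pyshark makes this so much less painful than parsing packets by hand, here's a minimal sketch of pulling HTTP agent strings from a capture: tshark's display filter does the heavy lifting, and the Python side only reads the fields it cares about. This is just an illustration, not Satori code; the `agent_os_guess` table is a made-up toy mapping (the real tool would query the fingerprint .xml files), and `traffic.pcap` is a placeholder filename.

```python
def agent_os_guess(user_agent):
    """Toy OS guess from a User-Agent string -- illustrative only,
    not Satori's actual fingerprint data."""
    hints = [
        ("Windows NT 6.1", "Windows 7"),
        ("Windows NT 5.1", "Windows XP"),
        ("Android", "Android"),        # check before the generic Linux match
        ("Mac OS X", "OS X"),
        ("Linux", "Linux"),
    ]
    for needle, label in hints:
        if needle in user_agent:
            return label
    return "unknown"

def dump_user_agents(pcap_path):
    # Imported here since pyshark needs tshark installed on the system.
    import pyshark
    # The display filter means tshark only hands us packets that
    # actually carry a User-Agent header -- no manual TCP/HTTP parsing.
    cap = pyshark.FileCapture(pcap_path, display_filter="http.user_agent")
    try:
        for pkt in cap:
            ua = pkt.http.user_agent
            print(pkt.ip.src, agent_os_guess(ua), ua)
    finally:
        cap.close()

# With a real capture file:
# dump_user_agents("traffic.pcap")
```

That's the whole idea: let tshark do the protocol dissection and keep the Python layer down to a lookup against the fingerprint files.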
I'm thinking about adding some SSL fingerprinting to it also.
All of this to say, evidently Satori isn't dead from my end! Just taken a bit of a break.
Tuesday, January 13, 2015
In the past few weeks, Microsoft has appeared a bit peeved with Google's disclosure policy. They had a patch planned for Patch Tuesday (today), but the information about the vulnerability hit the 90-day mark back at the end of December and was released by Google.
There have been a number of threads going on the different lists I'm on: some support this, saying Microsoft knew Google's policy and knew the deadline, while others are upset at Google, which knew a patch was in the works and released the details anyway.
I've been back and forth on Full Disclosure vs. Responsible Disclosure over the years. I see both sides and understand the needs. I do believe the security researchers who find these bugs and push the vendors to get patches out the door are important, but I also believe a lot of these researchers (not all, but a lot) haven't had to support large organizations and deal with the "headache" these things cause.
In the end, supporting or trying to secure a large organization is tough to start with, made tougher by the numerous pieces of software and hardware that may be out there, and made even tougher when you don't have total control over what is on your network (at least in the .edu space). Add to that screwed-up patches that get pulled and 3rd parties disclosing things "days" before a patch is due out, and it's almost enough to make you pull your hair out some days.
Microsoft has the problem of trying to make sure that things are backwards compatible, supporting software from 10+ years ago. Google, on the other hand, just drives forward with new OS versions and drops support for older ones.
Case in point:
As we see more and more devices built on the Android OS with their support just ending, due to carriers not doing upgrades or whatever, it will be interesting to see how things play out in the future. Will Google be the one that takes the heat from the public, or will the carriers? Or will the idea of just throwing the old equipment away and constantly upgrading continue to be the norm?
I'm still running an old 2.3.x "smart" phone. I don't surf the net with it; it gives me phone access and it gives me my calendar stuff. It works for what I need, and I know its limitations and the security implications if I do surf the web with it. How many users do? Should we really be forced to spend that much every 2 years to replace older tech? Maybe constant change is just the norm now; it isn't like when I started, when we got new AV definitions every 6 months :) But I hate to see us continue to throw away perfectly working tech that could be patched.
Oh well, I digress. Another Patch Tuesday is upon us, and another will come next month. Changes will continue to happen, and those of us who have to support systems will continue to adapt or, short of that, move on to other things!