LogRhythm launch tool for automated PCI compliance

LogRhythm has launched an automation suite for PCI DSS compliance.

According to the company, this helps organisations maintain continuous compliance and lower the costs of meeting PCI regulatory compliance requirements.

The suite doubles the number of PCI controls that it supports or augments to 80, and adds exception-based alerting and reporting. According to the company, it offers a package of proactive alarms, automated behavioural profiling, extensive reporting options mapped to individual requirements, and SmartResponse plug-ins for easy enforcement.

Dave Pack, director of LogRhythm Labs, said: “Achieving PCI compliance is a huge challenge for retailers and other organisations industry-wide. The PCI compliance solutions that have historically been available to organisations lack the tools that organisations really need to meet compliance requirements.

“LogRhythm's Automation Suite is an industry game changer. Its behavioural profiling and reporting capabilities give organisations peace of mind that they are always in compliance.”
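Exception-based alerting of the kind described above is typically built on a statistical baseline. The following minimal sketch is an illustration of the general technique, not LogRhythm's implementation; the metric, sample values and threshold are invented. It flags a value that strays several standard deviations from its historical mean:

```python
import statistics

def build_baseline(samples):
    """Compute a per-metric baseline: mean and standard deviation."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_exception(value, baseline, threshold=3.0):
    """Flag a value deviating from the baseline by more than
    `threshold` standard deviations."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical hourly counts of failed logins on a cardholder-data server.
history = [4, 6, 5, 7, 5, 6, 4, 5]
baseline = build_baseline(history)

print(is_exception(6, baseline))    # → False (within normal range)
print(is_exception(120, baseline))  # → True (sharp spike, raise an alarm)
```

Profiling per host or per user in this way is what lets a system alert only on deviations, rather than forcing analysts to review every event.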

Former army intelligence analyst and CISO slams proposed cyber reserve force

The proposed cyber reserve force has been criticised over a lack of preparation and an over-reliance on specialists working voluntarily.

Mark Brown, director of information security at Ernst & Young, former CISO of SAB Miller and winner of the 2011 SC award for information security person of the year, said that the creation of the reserve force was not enough to deal with modern cyber issues.

While he welcomed the Government's announcements on using private sector capability to help the public sector tackle cyber security risks, he said that what was needed was a dedicated, full-time capability, fulfilling the needs of both the private and public sectors and working in partnership with professionals at the ‘coal face' in industry as well as with government nerve centres such as GCHQ.

Brown said: “However, the creation of a cyber reserve and a UK Computer Emergency Response Team (CERT) does not go far enough. The level of threat continues to grow at a pace that cannot be met through part time action.

“Cyber criminals are redefining the term ‘organised crime' and in many respects are more organised than the community seeking to protect businesses from cyber crime and information security risks. A reserve force, made up of retired information security professionals, runs the risk of being unable to keep pace with the changing technologies and risk mitigation practices necessary to maintain a strong defence.

“At the same time information security professionals employed in business are unlikely to be able to dedicate the time to provide the necessary support.”

Speaking to SC Magazine, Brown referred to the recent Ernst & Young survey, which claimed that UK firms have concentrated on short-term fixes for security problems, rather than looking at overall threats, mainly due to a lack of people with specialist security skills.

Brown said: “In that survey, 85 per cent of UK businesses feel that the information security function is not serving the needs of the business. Businesses are fed up with information security not meeting business demands, so there is even less time to give up for the government's goal. This is not going to work.

“I understand the case; information security does take time and our team at SAB Miller worked 24/7 across the globe. If you look back at the 1990s and the move to mass outsourcing, most operational IT security was done by system integrators, while now it is strategists and ‘do-ers', and the skills companies require from IT security are keeping information secure and understanding where it is outsourced to.”

Brown was also critical of the £650 million fighting fund, saying that it is split across five years and nothing has been seen of it yet, mainly because the aims of the Cyber Security Strategy amount to the mission statement of GCHQ.

“Is this government getting security done on the cheap?” he asked. “In this time of austerity, is government doing parts that are required? It can only be a stepping stone. This needs to engage the whole of UK plc.”

The ‘Cyber Reserve' force was announced in a statement from Francis Maude, minister for the Cabinet Office and Paymaster General, marking the first year of the Cyber Security Strategy. The concept is to draw on the wider talent and skills of the nation in the cyber field.



O2 Ireland admits loss of unencrypted back-up tape over a year ago

The Irish arm of mobile provider O2 has admitted that a back-up tape was lost more than a year ago.

In a statement, O2 Ireland said that its IT support partner IBM had informed it of the loss of a tape used for routine daily IT back-up work. The tape remains unaccounted for and, O2 admitted, was not encrypted, although the company said it was ‘in a format which is not accessible to someone trying to access it and requires specialist technology to extract any readable information from it'.

It said: “The data on the tape, which relates back to 2011, contained a snapshot of information held on some of O2's own internal company drives at the time. We are not aware of any incident since the tape was misplaced whereby data that might have been on the tape was accessed or used.

“Once O2 was made aware of the incident it immediately launched an investigation to find out what kind of information was on the tape and to verify whether it could be found. O2 also notified the Data Protection Commissioner of the incident and has been working with them closely since.”

It said that it is possible that the tape could contain some personal data, but that it was more likely to contain information about O2's normal business affairs and company information. “While the tape has been lost, it is likely still in an O2 building. As a result we believe there is a low risk to customer data privacy,” it said.

“O2 takes data security very seriously and while it believes there is a low risk to customer data privacy as a result of this incident, as a precaution, we are taking this opportunity to inform customers, existing and former.

“O2 sincerely apologises to its customers that the incident in question occurred. We also want to reassure customers that stringent measures have since been taken to ensure such an incident does not happen again.”

Mark Bower, data protection expert and VP at Voltage Security, said: “This is exactly why the telecom companies in the EU are required to do more to protect customer data under a number of rules that were introduced a long time ago. In 2002, EU Directive 2002/58/EC – a revision of a 1997 privacy directive – required telecom carriers to take measures ‘ensuring personal data is accessed by authorised persons only; protecting personal data from being destroyed, lost or accidentally altered; and ensuring the implementation of a security policy on the processing of personal data'. This was strengthened again in 2008.

“Clearly the conclusion that has to be drawn here is that since the whereabouts of the tape are unknown and the data wasn't protected, the customer data is at risk and there is a question of how telecom companies stand up against the legislation. The risk to customer data here could have been easily mitigated with data-centric security, by protecting the data at the source so that it stays protected over its lifecycle – including on backup tapes.

“If the reaction to this is to implement tape encryption, that will be exactly the wrong knee-jerk response. Data goes everywhere today, and any time it is present in a low-trust environment such as cloud, backup, storage, or being transmitted or in use, data protection is critical, not optional.”

Brian Honan, consultant at BH Consulting, said that the release raised a number of issues: why does O2 not know what was on the tape, why was the tape not encrypted, and why did it take IBM so long to notify O2 of the loss?

He said: “Most backup systems have a logfile or record of what data was backed up. It seems strange to me that there is no record as to what data was, and was not, backed up onto the tape.

“Why was the tape not encrypted? Copying data onto a tape means at some stage that data can be read back from the tape. This means anyone with the same type of tape drive and software can restore the data. If that data is not encrypted then anyone with that equipment can restore and read the data. If the data is encrypted then even restoring it from tape makes it inaccessible to those without the proper access.”
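Honan's point can be illustrated with a toy Python sketch: once the bytes written to tape are combined with a keystream derived from a secret key, restoring them without that key yields only noise. This is illustration-only crypto (real backup encryption would use AES through a vetted library), and the key and sample record are invented:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic pseudo-random keystream from the key.
    Illustration only -- not a production cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; applying it twice with the same key
    recovers the original bytes."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

record = b"confidential customer record"
tape_image = xor_crypt(b"backup-key-2011", record)

# Restoring with the right key yields the data; any other key yields noise.
print(xor_crypt(b"backup-key-2011", tape_image) == record)  # True
print(xor_crypt(b"wrong-key", tape_image) == record)        # False
```

Had the tape image been encrypted in this manner, losing the cartridge without also losing the key would expose nothing readable, which is precisely the gap Honan highlights.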



SMS spoofing flaw closed as technology marks 20th anniversary

Users who send and receive Twitter messages via text message from their mobile phone were vulnerable to a weakness that could allow anyone to post a tweet to their account.

According to developer and security researcher Jonathan Rudenberg, who discovered the flaw, all an attacker needed to know was the target's phone number, since the originating address of an SMS message can be spoofed.

He said: “Like email, the originating address of [an] SMS cannot be trusted. Many SMS gateways allow the originating address of a message to be set to an arbitrary identifier, including someone else's number.

“The cleanest solution for providers is to use only an SMS short code to receive incoming messages. In most cases, messages to short codes do not leave the [mobile phone] carrier network and can only be sent by subscribers. This removes the ease of spoofing via SMS gateways.”

Rudenberg said that he had received confirmation from both Facebook and Twitter that the issue has been resolved. He suggested that services like Twitter implement challenge-response questions, which, for example, could require the sender to repeat back a ‘short alphanumeric string' to confirm that they are the one who sent the tweet.
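A challenge-response step of the kind Rudenberg suggests is straightforward to sketch. The snippet below is hypothetical (the function names and six-character challenge format are assumptions, not Twitter's actual mechanism): the service texts the sender a short alphanumeric string and only accepts the command once it is echoed back.

```python
import secrets
import string

def issue_challenge(length=6):
    """Generate a short alphanumeric string the sender must echo back."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def verify_reply(challenge, reply):
    """Accept the queued command only if the reply matches the issued
    challenge, proving the sender controls the originating phone."""
    return secrets.compare_digest(challenge, reply.strip().upper())

challenge = issue_challenge()
print(verify_reply(challenge, challenge))  # True: genuine sender echoes it
print(verify_reply(challenge, "WRONG"))    # False: spoofer cannot reply
```

The key property is that a spoofer can forge the *originating address* of a message but never *receives* replies sent to the victim's number, so they cannot see the challenge to echo it.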

This week marked the 20th anniversary of the first text message and according to the Guardian, more than 8 trillion messages were sent last year and around 15 million leave our mobile screens every minute.

Recent research from Cloudmark found that 64 per cent of UK mobile text message users aged 16 and older had received spam texts in the past year, but most are still unsure of how to take appropriate action. It found that 63 per cent of British texters would ignore an unsolicited text message, yet only 10 per cent would actually take the correct action by reporting spam to their carrier and a third (31 per cent) would text ‘STOP'.

Trevor Connell, managing director of Siemens Enterprise Communications, said: “It is interesting that in the middle of all the advances in telecommunication technologies we have seen in the last 40 years, it has been the one that offers the least personalised service that has caught on so fast. Indeed, having the sender's face appear next to the SMS message received has only been advanced in the last six years.”



Stonesoft to launch military-ready resilient firewall appliance

Stonesoft has announced the invention of a next generation firewall and VPN appliance.

Named the MIL-320, the appliance is certified to military level for high performance in extreme temperatures, wet, mud and dust, and utilises Stonesoft's data stream-based normalisation and deep inspection across all protocols and network layers.

According to the company, the MIL-320 offers protection against sophisticated attacks and includes WLAN and ad-hoc mesh network support for resilient, strongly encrypted local communication links and data transfer between deployed units. The solution also provides unique features, such as the capability to use custom encryption ciphers from a connected USB stick.

Also, like all Stonesoft solutions, the MIL-320 has technology to detect and protect against Advanced Evasion Techniques (AETs). The Stonesoft MIL-320 will be available for shipping from the beginning of 2013.

Ash Patel, regional director of Stonesoft UK, said: “With the MIL-320, network security can now be made available at locations and conditions where this has never been possible before. The appliance differs from traditional enterprise next generation firewall/IPS solutions through its water and dust shielding capabilities, resistance to bumps, shocks and vibrations and wide operating temperature range.

“Moreover, the solution beats traditional military solutions with its unseen combination of the latest network security, centralised management and better performance. It also offers much better environmental protection, security and strong inspection capabilities and performance than traditional industrial solutions. In short, the MIL-320 is first of its kind and unbeatable, literally.”



CIOs do not test cloud security solutions

Fewer than half of CIOs have tested security systems and procedures provided by cloud vendors.

According to a survey of 100 CIOs, only 45 per cent said that they test the security systems and procedures provided by cloud vendors, yet security remains the biggest concern in migrating systems to the cloud, with 84 per cent of respondents either concerned or very concerned about cyber security.

Of those surveyed, 54 per cent said that they were ‘somewhat concerned' while 30 per cent said that they were ‘very concerned'. However, 11 per cent of CIOs said that they were not taking any proactive action to address cyber security, while 12 per cent said that they were ‘not concerned' about it.

Ryan Rubin, UK director of risk consultancy Protiviti, which conducted the survey with its parent Robert Half, said: “These statistics indicate that either there is an inherent trust in cloud service providers – that they have good security governance in place – or there is a lack of visibility of the potential risks associated with using them.

“However, there is also a potential risk that CIOs are not always involved in the overall business decision to procure cloud services – limiting their ability to carry out effective due diligence before these services are adopted.

“Since an increasingly higher percentage of IT security breaches involve third parties, gaining assurance from cloud providers is critical to managing information security risk. Whilst companies may migrate IT towards cloud providers in an attempt to reduce costs, they cannot outsource their information security risks.

“Unless adequately managed, the cost of security breaches – whether regulatory or legal – may outweigh the perceived benefits of moving into the cloud.”



Citrix announces intention to acquire Zenprise

Citrix has entered into a definitive agreement to acquire mobile device management (MDM) vendor Zenprise.

The deal is expected to close in early 2013, and Citrix said that it intends to integrate the MDM technology with its CloudGateway and Me@Work solutions for the management of mobile apps and data. Terms of the acquisition were not disclosed.

According to Citrix, no current solutions offer a combination of application management as well as device management to allow 'bring your own' policies, leading users to deploy a mix of products from multiple vendors, often resulting in complicated deployments and a poor user experience.

It said that what is needed is a single, integrated enterprise mobility product line that combines the power of MDM with the ability to manage mobile apps and data across any mix of user devices. By combining Zenprise's MDM with its CloudGateway and Me@Work solutions, Citrix said that it will become the first company to offer mobile device, app and data management in a single, integrated product line.

Sumit Dhawan, vice president and general manager of mobile solutions at Citrix, said: “Zenprise was a clear choice for Citrix, with its leading MDM product, an experienced team, a history of innovation, and a footprint on more than one million devices. With a complete Citrix enterprise mobility solution, customers have all the necessary pieces to manage and secure mobile apps, content and devices.”

Dr. Waheed Qureshi, chairman and CTO of Zenprise, said: “This combination not only brings together incredibly complementary products, but a shared culture of innovation. Citrix and Zenprise have each proven their ability to deliver leading offerings in a fast-paced market. Together, we will accelerate our ability to anticipate and exceed our customers' expectations for world-class enterprise mobility solutions.”



SC IT-GRC conference: Changes to Data Protection Directive could affect independence of ICO

Changes to the Data Protection Directive could affect the independence of the Information Commissioner's Office (ICO).

Speaking at the SC Magazine IT Governance, Risk and Compliance (GRC) conference in London, Simon Rice, principal policy adviser (technology) at the ICO, said that while some of the changes to the directive were welcome, some were not and the removal of the notification fee removes the ICO's prime source of funding.

Rice said that the changes to the 14-year-old data protection directive, announced in January of this year, were welcome as the new text is ‘more prescriptive of what businesses need to put in place', and that the ICO welcomed improved subject rights and clearer responsibilities on transparency and data privacy.

Less welcome, he said, was the divide between a directive for the public sector and a regulation for the private sector; it would be difficult to regulate both, as some businesses sit between the two, so regulation could ‘get very difficult and confusing'. Rice said: “It would be better if there was one piece of legislation across the board, regulated effectively. This is very over-prescriptive and the ‘thou shalt do this' nature doesn't fit.”

He was also critical of the removal of the notification fee that businesses pay the ICO, which funds the data protection regulator, saying that the change ‘increases the workload but removes funding'.

“Currently there is no minimum requirement for breach. There is the proposed removal of the requirement to notify, but we are funded by fee and it allows us to remain independent from the government and public and private sector, so will we be issuing penalties to fund ourselves? We are tackling this as it allows us to remain independent,” he said.



SC IT-GRC Conference: All hands on a clear deck will help GRC journey

A governance, risk and compliance (GRC) strategy needs to begin with a piece of paper and conclude with business interest.

Speaking in a panel debate at the SC Magazine IT Governance, Risk and Compliance (GRC) conference in London, Peter Gibbons, head of IM security at Network Rail, said that GRC ‘starts on a piece of paper', as it is important to know what it looks like on paper before thinking about what tools are needed. “Get your design work done before you select the processes. GRC is the glue that sticks it together,” he said.

Jitender Arora, senior programme manager for security and risk at GE Capital EMEA, said that a GRC framework should be much broader than IT alone, and that a programme should create boundaries to determine strategy: the journey starts with risk appetite, and regular awareness around it ensures boundaries are being set.

He said: “IT security means a generic framework, policy or change management, and they are all there because of risk. It is very important to be structured on risk appetite; if you decide that a password will be eight to ten characters, think about the amount of money you have to spend on that, so how do you make that decision?

“What is changing? Look at more attacks or evidence of internal frauds because of hacking, and start your journey with a risk appetite. Best practice is generic for the world but maybe not best for you, and what is relevant for your organisation comes with a cost. There is nothing like 100 per cent security, but businesses are pressured with not enough money, so you should define the right set of controls and policies, and decide what level of risk you want to take.”

Alan Rodger, senior analyst at Ovum, said that an understanding of risk appetite will require a partnership between IT functions and the business. “Every business is different, and different organisations have different risks and different people managing the risk – it could be the CFO, the chief risk officer or the compliance officer. It depends on the industry, so define who is at the table and move on from there,” he said.

Gibbons said: “The challenge with governance is knowing what you have to do, and there is no clear guidance on how controls link into business objectives. Trying to get everyone round the table to work towards a clear goal is a challenge, so using a clear language is a good first step.”

Suzanne Rodway, group head of privacy at RBS, agreed that a 'clarity of terminology and determination of what you want to achieve is a good step'.

Rodger said: “GRC isn't a security practice, but it should work with security practices. The best way to implement security management is through GRC, as you have an idea of what you want a solution to do.”



Egress Switch selected by law firms

Egress has been selected by law firms Ward Hadaway and Morgan Cole.

The firms have selected Egress's Switch technology to provide secure encryption and complete confidentiality when sharing information with clients and third-party partners.

Greg Taylor, head of IT at Ward Hadaway, said: “The legal market is now so competitive it is very hard to prove that the service provided by one fee earner is better than the service provided by another.

“At Ward Hadaway, we wanted to differentiate ourselves from the rest of the market by offering our clients a level of information security that meant that we could guarantee 100 per cent client confidentiality at all times. What is more we wanted a system that was easy to use and actually bucked the trend by improving efficiency.

“During the selection process it soon became clear that ease of use was as important as the security of the system itself. Egress Switch exceeded our expectations and met all of our business requirements, as sending an encrypted email was as simple as sending a normal email. Better still, their unique ‘follow the data' approach to security also satisfied our desire to provide end-to-end data assurance.”

Also, Morgan Cole LLP selected Egress Switch to secure confidential client information and meet regulatory compliance. Paul Dryden, IT operations manager at Morgan Cole, said: “The need to electronically share confidential and sensitive information externally with our clients and partners has increased substantially in the last few years.

“Egress Switch allows our lawyers to share information securely regardless of the transfer mechanism. Any information sent by email, copied to removable media, or transferred via FTP/HTTP servers can be shared with confidence, offering auditing and real-time control – even after the data has been sent.”

Tony Pepper, CEO of Egress Software Technologies, said: “With the recent legislation from the Solicitors Regulation Authority (SRA), which mandates all law firms to nominate or employ a compliance officer, we are seeing more and more clients position information security as a key priority. Without clear security policies and investment in the right technology, firms risk substantial regulatory fines and reputational damage.”



Microsoft to make it 83 bulletins for 2012 as it fixes five critical flaws next Patch Tuesday

Microsoft is to release seven bulletins on its final Patch Tuesday of 2012, fixing five critical issues.

According to an advance notification, the critical bulletins will address vulnerabilities in Windows, Word, Windows Server and Internet Explorer. The other two patches are rated as important and will address issues in Windows.

Ziv Mador, director of security research at Trustwave SpiderLabs, said: “Six out of the seven will result in Remote Code Execution, which is about as bad as bad gets. The last one deals with something Microsoft is calling a ‘Security Feature Bypass' and is only in Windows Server 2008 and 2012. Despite being only rated as important, that one is looking very interesting this month.

“Bulletin one looks to be extremely nasty, allowing Remote Code Execution in Internet Explorer 6, 7, 8, 9 and 10, including the version of Internet Explorer on that shiny new Microsoft Surface running Windows RT. This makes it the second patch in as many months for Microsoft's new gadget.”

Andrew Storms, director of security operations at nCircle, said: “Of course, there's still the possibility of some holiday zero-day mayhem that could require an out-of-band patch. Let's hope that doesn't happen.

“Just in time for online holiday shopping to come to full frenzy, every supported version of Internet Explorer will need a critical patch. It's almost certain this will be the number one priority for IT teams everywhere next week.

“There's a worrisome Exchange server bug marked critical.  IT teams will need to spend the time reviewing this bulletin next Tuesday to better understand the risk and decide if they need to patch it immediately. This could be a tricky decision for businesses focused on year end revenue because patching the bug may cause some downtime as the year comes to a close. Each individual business will have to decide if the risk of downtime is greater than the risk of being vulnerable.”

Paul Henry, security and forensic analyst at Lumension, said: “Fortunately, none are currently under active attack, so that should set IT's mind at ease as they begin to apply this set of patches.

“In 2011, Microsoft had 100 bulletins for the calendar year, of which 34 were critical, 63 important and three moderate. In 2012, they reduced the number of bulletins by close to 20 per cent, coming in at 83 for the year, of which 35 were critical, 46 important and two moderate. It's great to see that Microsoft's Secure Coding Initiative is paying off, reducing the number of vulnerabilities in their software, resulting in an easier time for IT at Patch Tuesday time.”



Student charged with PayPal hacks

A student who was a member of Anonymous has appeared in court, where he was convicted over a series of attacks on PayPal.

Christopher Weatherhead, 22, who used the name Nerdo, was described as a leading player in distributed denial-of-service (DDoS) attacks, according to the Guardian. He was convicted on one count of conspiracy to impair the operation of computers, contrary to the Criminal Law Act 1977.

The charges alleged that Weatherhead was one of a number of hacktivists who spent ten days targeting PayPal, causing losses of £3.5 million as part of Anonymous' ‘Operation Payback'.

Judge Peter Testar at Southwark Crown Court warned him that he could face jail when sentenced at a later date with his three co-accused, who pleaded guilty at an earlier hearing. Weatherhead was freed on bail until sentencing in January, on a date yet to be set.

Weatherhead has been banned from using internet relay chat or posting online under the pseudonym Nerdo or any name other than his own.



UK businesses admit to being unsure about the cost of a data breach

The price of a data breach is unknown to around 70 per cent of UK businesses.

According to research by CipherCloud, when asked about the estimated cost of a security breach of sensitive information at their organisation (in terms of compliance fines, contractual breaches and reputation losses), 68 per cent said that they did not know.

The survey of 300 business decision makers found that 15 per cent of respondents believed the amount to be between £50 and £100, while two per cent felt it could be over £300.

Richard Olver, regional director of EMEA at CipherCloud, said: “Our research reveals some interesting insights into the minds of senior business professionals, with a lack of understanding of the true cost of the breach of their sensitive information. On the other hand, it's clear that organisations are very concerned about the risk of their information being breached.”

Commenting, Christian Toon, head of information risk at Iron Mountain Europe, said: “These figures are concerning yet sadly not surprising. It is time for businesses to take responsibility for the sensitive information they hold.

“The drive for this culture of responsibility and secure information management needs to come from the very top of the business, and should include sufficient training, support and regular reviews. It is time for all organisations to make Corporate Information Responsibility part of their operational DNA in the same way that many have done with Corporate Social Responsibility.”



The danger of uploading data to an untested cloud revealed

There is a distinct danger in ‘simply uploading data and assuming it remains secure', and greater care needs to be taken to ensure the safety of data.

According to Joseph Souren, vice president and general manager EMEA at Wave Systems, enterprises need to harness cloud infrastructure while ensuring that the information they store is secure and that they retain control over it.

He said: “Cloud storage has become increasingly prevalent in today's market, with enterprises quick to embrace the benefits. Yet, there have always been dangers in simply uploading data and assuming it remains secure. The latest figures are of particular concern as the majority of CIOs remain unaware that cloud storage services are not inherently secure, and that further measures must be taken to maintain data integrity.

“Enterprises need to harness cloud infrastructure but also ensure the information that they put into the cloud is secure. They need to regain the control over their data and choose who can see it.”

Souren referenced research from Protiviti, whose survey of 100 CIOs found that 55 per cent do not test the security systems and procedures provided by cloud vendors, while 84 per cent said that they were either concerned or very concerned about cyber security.

Research from CipherCloud, also released last week, found that 44 per cent of 300 business decision makers were concerned about security and risk of data breaches regarding their organisations' sensitive information being in the cloud. A third (33 per cent) said that a loss of control over data was the biggest concern, while ‘data residing under foreign jurisdiction control' was highlighted by 23 per cent.



Symantec launches Endpoint Protection 12.1, VDI support

Symantec has launched the latest version of its endpoint protection suite, announcing integration with VMware's vShield Endpoint and support for the latest Apple and Microsoft operating systems.

The Mountain View, Calif.-based security giant said SEP 12.1 includes an improved behavioral engine it calls SONAR, which can monitor up to 1,400 endpoint behaviors without impacting performance.

Integration with VMware vShield Endpoint offloads antivirus and antimalware agent processing to a dedicated secure virtual appliance for deployment and monitoring in VMware environments. The goal of VMware vShield Endpoint is to prevent AV storms, which occur when antivirus software scans multiple guest virtual machines on a single physical host at once. Symantec joins McAfee, Kaspersky Lab, Trend Micro and BitDefender in supporting vShield Endpoint.

Symantec said its improved SONAR behavioral engine helps mitigate the risk of zero-day threats. Previously, SONAR monitored approximately 400 behaviors, looking for out of the ordinary activity. The technology can now support up to 1,400 behaviors.  

Industry analysts say all the antivirus vendors have been adding and updating reputation features in an attempt to bolster the effectiveness of the technology. SONAR was acquired by Symantec in 2005 and initially integrated into its Norton consumer products. The company has tweaked its performance and reduced the power it consumes to monitor executables on endpoint machines and detect whether files are acting suspiciously. Other antivirus vendors, including McAfee, Trend Micro and Kaspersky Lab, provide similar behavioral or reputation-based monitoring capabilities.

The latest version also added platform support for Windows 8, Windows Server 2012 and OS X Mountain Lion.

Symantec Endpoint Protection customers that are current with their maintenance agreements are entitled to this upgrade free of charge.  

A number of studies testing antivirus products have found the protection failing to adequately detect new malware. Experts say signature-based antivirus can't keep up with the new malware being produced daily. A recent study conducted by security firm Imperva in conjunction with researchers at Tel Aviv University gave high marks to Symantec and McAfee, and said ESET, Avast, Kaspersky Lab and Trend Micro also performed well. Most experts point out that no security technology is a panacea. Antivirus provides an additional layer of protection at the endpoint that, at a minimum, can detect known threats fairly well.

— Robert Westervelt




Software development maturity driving down ZDI flaw submissions

A rapidly rising secondary market for zero-day vulnerabilities and improved software coding practices are combining to reduce the number of submissions to the HP TippingPoint Zero-Day Initiative, according to a vulnerability researcher who is overseeing the bug bounty program.    



Experts develop protections for product piracy, intellectual property theft

A researcher at the University of Massachusetts Amherst has won an award as part of a team developing a way to make it easier to detect product piracy and intellectual property theft involving thumb drives, keyfobs and just about any other embedded device.


A hidden signal in the power supply of embedded devices can validate their authenticity, the researcher said, and could reduce the costs currently associated with reverse engineering or overriding software protections.

Georg T. Becker won the Best Cybersecurity Solution award at the second annual Advanced Cyber Security Center conference with his research, titled Side-channel based watermarks for embedded devices (.pdf). Becker proposed embedding small circuitry that introduces a unique signature into the power consumption of the device. The watermark would be indistinguishable from noise, but using side-channel analysis, the hardware maker could reliably detect the signature, Becker said.
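Becker's detection step is essentially a correlation test: the evaluator correlates the expected watermark sequence against a measured power trace, where the mark stands out statistically even though it looks like noise. The Python sketch below simulates that idea; the key-derived sequence, the signal strength and the Gaussian noise model are all invented for illustration and are not taken from Becker's paper.

```python
import math
import random

def watermark_sequence(key, n):
    """Pseudorandom +/-1 sequence derived from a secret key (hypothetical scheme)."""
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed(trace, mark, strength=0.05):
    """Add a faint watermark to a simulated power trace."""
    return [t + strength * m for t, m in zip(trace, mark)]

def correlate(trace, mark):
    """Normalised correlation between a measured trace and the expected mark."""
    n = len(mark)
    mean = sum(trace) / n
    num = sum((t - mean) * m for t, m in zip(trace, mark))
    den = math.sqrt(sum((t - mean) ** 2 for t in trace) * n)
    return num / den

rng = random.Random(1)
noise = [rng.gauss(0.0, 1.0) for _ in range(20000)]  # ambient power noise
mark = watermark_sequence(key=42, n=20000)           # evaluator knows the key
marked = embed(noise, mark)                          # genuine, watermarked device

# The genuine trace correlates well above the ~1/sqrt(n) chance level;
# an unmarked clone does not.
print("genuine:", round(correlate(marked, mark), 3))
print("clone:  ", round(correlate(noise, mark), 3))
```

With 20,000 samples the chance correlation of an unmarked trace sits around ±0.007, so even a watermark at five per cent of the noise amplitude is easy to distinguish; that gap between the signal and the noise floor is what would make such a scheme practical.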

Product piracy and intellectual property theft have been rising concerns of software makers and manufacturers of embedded devices. Some of the focus on embedded device security is in preventing device attacks, in which an attacker targets the device to alter its processes or leverages it to gain access to more critical systems. But a more immediate concern is gaining control of the costs associated with device piracy and intellectual property theft. McAfee and its parent Intel have made the area a priority, trying to develop ways to create chip-based or hardware-based security.

The Verizon Data Breach Investigation report noted intellectual property theft as a serious issue. The problem is being fueled by targeted, persistent attacks believed to be originating from China, and designed to gain access to corporate systems and maintain a presence on them. The malware remains stealthy, stealing intellectual property and other sensitive data over long periods of time.

Becker also believes the same technique can be applied to software for embedded devices by adding assembler instructions to the software program that can produce a unique signal in the power consumption of the device. The technique would eliminate the need for an evaluator to overcome the memory read protection to verify the authenticity of the device, Becker said.

The idea was first introduced by Becker and two other researchers in 2010 at the IEEE International Symposium on Hardware-Oriented Security and Trust. The software protection technique was published in August in IEEE Transactions on Information Forensics & Security.

Embedded systems experts said the embedded systems research and other techniques being developed hold promise.

The technology would not fit all markets, but it may meet the needs of some areas where anti-piracy and security are priorities, said embedded systems expert Michael Barr, CTO of Baltimore-based consultancy Barr Group. Cost is a critical factor in mass producing some embedded devices, he said.

"Now that the consumable device is smart, it still has to be low cost so we get into some complicated issues there," Barr said. "Most embedded systems are insecure either because nobody has taken the time or money to secure them or because each device has a unique type of attacker with different motivations."

Card readers, smartcards and ATMs are hardened with encryption and other technologies to prevent spoofing and tampering, said Elecia White, founder of Logical Elegance, an embedded systems consulting firm based in San Jose, Calif. Manufacturers and software makers are increasingly concerned about security to protect both their own interests and the sensitive data being collected and potentially transmitted by the devices, White said.

"Vendors have to be pretty strict about encryption and security so they don't get a bad reputation because the cheaper pirated products not only cut in on the bottom line, they typically perform poorly," White said.

Intellectual property theft and piracy is a growing problem with health care clinical consumables, such as thermometers, needle guides and ultrasound transducers, which are thrown away after use. A watermark or other technique that can prevent piracy and cut the costs associated with identifying pirated items would likely be seriously considered by manufacturers of those items, White said.

"When someone gets sick because [an] insulin pump was spoofed, there will be some major losses there," White said. "People working in [the] medical device field are very aware of security and how to make it as bulletproof as possible."

Other manufacturers and software makers of consumer devices such as the Roku and the Nintendo Wii have the processing power to put strong encryption in place and take other steps to protect their IP, White said.

"Children's toys and other consumer products have such a fine margin of profit that building something into the system isn't always viable," White said. "It's sometimes better to get it into the market, get it done, and build a new one, instead of spending more time on something that can't be copied."




Cutwail botnet spam campaign tied to Zeus banking Trojan

A new spam campaign driven by the largest spam botnet has been tied to the notorious Zeus Trojan and is believed to be spreading quickly, stealing account credentials and credit card numbers.

Researchers at Dell SecureWorks Counter Threat Unit have discovered the spam messages originating from the Cutwail botnet, attempting to trick victims into downloading the Gameover Zeus banking Trojan.

The spam messages are made to look like they come from top U.S. banks. A typical message reads: “You have received a new encrypted message or a secure message from [XYZ] Bank." The message encourages recipients to download an attachment and register for a new system designed to protect privacy and personal information. Instead, the attachment contains the Pony downloader, which installs the banking malware.

"The Cutwail botnet only needs to employ approximately 10,000 bots per spam campaign to send out hundreds of millions of malicious spam messages to computer users all over the world," said Elizabeth W. Clarke, a Dell SecureWorks spokesperson.

So far researchers have detected several variants of the spam messages, all encouraging victims to open a file attachment to read a message, listen to a voicemail or register for a new privacy system. Dell SecureWorks said employees should be trained to never click on a link or an attachment in an email, even if they know the sender. "Always verify that the sender sent the email," Clarke said. "Additionally, update your IPS/IDS countermeasures and firewalls to detect the latest threats." 

The Zeus Trojan has been a major headache for banks and financial firms, with different variants infecting customer systems and attempting to dupe individuals into giving up their account credentials. New variants of Zeus are frequently detected by researchers. The issue has become such a problem that Microsoft took legal action to disrupt some Zeus botnets. But despite a few victories, cybercriminals continue to rebuild their operations.

Dell SecureWorks said the Gameover Zeus botnet is a peer-to-peer botnet and one of the largest in existence with more than 678,000 infections. Unlike other Zeus botnets with a centralized command and control server, peer-to-peer botnets are difficult for security teams and law enforcement to take down. It has been a pest at many enterprises, detected on corporate systems and systems at universities, defense contractors and government agencies.

Those behind the Gameover Zeus botnet are believed to be the most aggressive, infecting machines and recruiting money mules to drain bank accounts in the United States and Europe. The gang uses a number of tools, including the Blackhole exploit kit, an automated toolkit believed to be the source of many financially motivated attacks, and DirtJumper, which is used to DDoS financial institutions while a victim's bank account is being looted.




Study finds firms lagging in health care privacy, data security protections

Health care firms are lagging far behind other industries with information security initiatives, according to a new study, which found many organizations suffering multiple data breaches, exposing confidential patient information, and ultimately failing to implement many basic security measures.



Twelve common software security activities to lift your program

The Building Security in Maturity Model is chock full of hardcore software security goodness as practiced by some of the most successful companies on earth. But there are too many activities to take on all at once without sinking the ship. Which BSIMM activities are the most important? And more critically, if you are just getting started, which BSIMM activities should you adopt first?

If you are getting started with the Building Security in Maturity Model (BSIMM), you should consider the twelve most common activities, all of which are straightforward and easy to adopt.

There are a few flies in the ointment with this oversimplified twelve-step plan. First, the BSIMM has 111 activities, so we're severely limiting our view if we focus on only twelve. Second, just because nearly every other firm in the world carries out the twelve activities we describe here, that doesn't mean they will work for your firm. But that's OK. We're going to forge ahead anyway.

Of the 111 activities observed in BSIMM4, there are twelve that at least 32 of the 51 firms we studied (63%) carry out, one identified in each practice. Though we can't directly conclude that these twelve activities are necessary for all software security initiatives, we can say with confidence that these activities are commonly found in highly successful programs. This suggests that if you are working on an initiative of your own, you should consider these twelve activities particularly carefully (not to mention the other 99). In addition, if you are planning a new software security initiative, the following activities are probably a good place to get started.

First some quick review. The BSIMM is the result of a multi-year study of real-world software security initiatives. The latest version, BSIMM4, was built directly out of data observed in fifty-one software security initiatives from firms, including: Adobe, Aon, Bank of America, Box, Capital One, The Depository Trust & Clearing Corporation (DTCC), EMC, F-Secure, Fannie Mae, Fidelity, Google, Intel, Intuit, JPMorgan Chase & Co., Mashery, McKesson, Microsoft, Nokia, Nokia Siemens Networks, Qualcomm, Rackspace, Salesforce, Sallie Mae, SAP, Scripps Networks Interactive, Sony Mobile, Standard Life, SWIFT, Symantec, Telecom Italia, Thomson Reuters, Visa, VMware, Wells Fargo and Zynga.

The BSIMM is a measuring stick for software security. The best way to use the BSIMM is to compare and contrast your own initiative with the data presented in the BSIMM. You can then identify goals and objectives of your own and look to the BSIMM to determine which further activities make sense for you. The BSIMM data shows that high maturity initiatives are well rounded, carrying out numerous activities in all twelve of the practices described by the model. The model also describes how mature software security initiatives evolve, change and improve over time.

A descriptive view of 111 software security activities

The BSIMM model was derived directly from data gathered through first-hand observation. Throughout the course of making nearly 100 distinct measurements (some firms measured twice, some firms include subsidiary measurements that roll into a single firm score), we have identified 111 activities. The most direct way to report this data is to show the number of times each activity was observed among the data set of 51 firms.

[Table: number of times each of the 111 activities was observed in the BSIMM4 data]

The table above shows the number of times each of the 111 activities was observed in the BSIMM4 data. An expanded version of this chart can be found in the BSIMM document itself (available for free under the Creative Commons license). The BSIMM document also meticulously describes each of the 111 activities. Here, we're sticking with the twelve most popular activities.

As you can see in the table, twelve of the activities are highlighted. Each highlighted activity is the most commonly observed in its practice, having been observed in at least 32 of 51 firms. That means each of the twelve activities is very popular and in common use in real-world software security initiatives.

Know that although the twelve activities we're covering are common, they may not make sense for your firm (for cultural, budgetary, or other reasons). However, the BSIMM data describes what is actually happening in the world today when it comes to software security, and thus provides very useful guidance from seasoned software security professionals.

Twelve core BSIMM activities

Without further ado, here are the twelve most common BSIMM activities. (We preserved the somewhat obscure BSIMM labels so you can check them out later in context of the entire BSIMM model.)

  1. SM1.4 Identify gate locations, gather necessary artifacts;
  2. CP1.2 Identify PII obligations;
  3. T1.1 Provide awareness training;
  4. AM1.5 Gather attack intelligence;
  5. SFD1.1 Build and publish security features;
  6. SR1.1 Create security standards;
  7. AA1.1 Perform security feature review;
  8. CR1.4 Use automated tools along with manual review;
  9. ST1.1 Ensure quality assurance (QA) supports edge/boundary value condition testing;
  10. PT1.1 Use external penetration testers to find problems;
  11. SE1.2 Ensure host and network security basics are in place; and
  12. CMVM1.2 Identify software defects found in operations monitoring and feed them back to development.

Detailed descriptions of each of the twelve activities, including real examples taken directly from the BSIMM data, can help bring these activities to life.

SM1.4 Identify gate locations, gather necessary artifacts: The software security process will involve release gates/checkpoints/milestones at one or more points in the software development lifecycle (SDLC) or, more likely, the SDLCs. The first two steps toward establishing release gates are: 1) to identify gate locations that are compatible with existing development practices, and 2) to begin gathering the input necessary for making a go/no-go decision. Importantly at this stage, the gates are not enforced. For example, the software security group (SSG) can collect security testing results for each project prior to release, but stop short of passing judgment on what constitutes sufficient testing or acceptable test results. The idea of identifying gates first and only enforcing them later is extremely helpful in moving development toward software security without major pain. Socialize the gates, and only turn them on once most projects already know how to succeed. This gradual approach serves to motivate good behavior without requiring it.

CP1.2 Identify PII obligations: The way software handles personally identifiable information (PII) could be explicitly regulated, but even if it is not, privacy is a hot topic. The SSG takes a lead role in identifying PII obligations stemming from regulation and customer expectations. It uses this information to promote best practices related to privacy. For example, if the organization processes credit card transactions, the SSG will identify the constraints that the PCI DSS places on the handling of cardholder data. Note that outsourcing to hosted environments (e.g., the cloud) does not relax a majority of PII obligations. Also note, firms that create software products that process PII (but don't necessarily handle PII directly) may provide privacy controls and guidance for their customers.

T1.1 Provide awareness training: The SSG provides awareness training in order to promote a culture of security throughout the organization. Training might be delivered by members of the SSG, by an outside firm, by the internal training organization, or through a computer-based training system. Course content is not necessarily tailored for a specific audience. For example, all programmers, quality assurance engineers and project managers could attend the same Introduction to Software Security course. This common activity can be enhanced with a tailored approach to an introductory course that addresses a firm's culture explicitly. Generic introductory courses covering basic IT security and high-level software security concepts do not generate satisfactory results. Likewise, providing awareness training only to developers and not to other roles is also insufficient.

AM1.5 Gather attack intelligence: The SSG stays ahead of the curve by learning about new types of attacks and vulnerabilities. The information comes from attending conferences and workshops, monitoring attacker forums, and reading relevant publications, mailing lists and blogs. Make Sun Tzu proud by knowing your enemy; engage with the security researchers who are likely to cause you trouble. In many cases, a subscription to a commercial service provides a reasonable way of gathering basic attack intelligence. Regardless of its origin, attack information must be made actionable and useful for software builders and testers.

SFD1.1 Build and publish security features: Some problems are best solved only once. Rather than have each project team implement all of their own security features (authentication, role management, key management, audit/log, cryptography, protocols), the SSG provides proactive guidance by building and publishing security features for other groups to use. Project teams benefit from implementations that come pre-approved by the SSG, and the SSG benefits by not having to repeatedly track down the kinds of subtle errors that often creep into security features. The SSG can identify an implementation they like and promote it as the accepted solution.

SR1.1 Create security standards: Software security requires much more than security features, but security features are part of the job as well. The SSG meets the organization's demand for security guidance by creating standards that explain the accepted way to adhere to policy and carry out specific security-centric operations. A standard might describe how to perform authentication using J2EE or how to determine the authenticity of a software update. (See [SFD1.1 Build and publish security features] for one case where the SSG provides a reference implementation of a security standard.) Standards can be deployed in a variety of ways. In some cases, standards and guidelines can be automated in development environments (e.g., worked into an integrated development environment). In other cases, guidelines can be explicitly linked to code examples to make them more actionable and relevant.

AA1.1 Perform security feature review: To get started with architecture analysis, center the analysis process on a review of security features. Security-aware reviewers first identify the security features in an application (authentication, access control, use of cryptography, etc.) then study the design looking for problems that would cause these features to fail at their purpose or otherwise prove insufficient. For example, a system that was subject to privilege escalation attacks because of broken access control or a system that stored unsalted password hashes would both be identified in this kind of review. At higher levels of maturity, this activity is eclipsed by a more thorough approach to architecture analysis not centered on features. In some cases, use of the firm's secure-by-design components can streamline this process.

CR1.4 Use automated tools along with manual review: Incorporate static analysis into the code review process in order to make code review more efficient and more consistent. The automation does not replace human judgment, but it does bring definition to the review process and security expertise to reviewers who are not security experts. A firm may use an external service vendor as part of a formal code review process for software security. This service should be explicitly connected to a larger software security development lifecycle (SSDL) applied during software development, and not just "check the security box" on the path to deployment.
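As a toy illustration of CR1.4, the sketch below uses Python's standard ast module to flag calls to a few dangerous functions; a human reviewer then judges each finding in context. The list of flagged names is invented for the example and is far smaller than any real rule pack.

```python
import ast

# Call targets this toy rule flags for human review (illustrative, not a standard list).
DANGEROUS = {"eval", "exec", "pickle.loads"}

def qualified_name(node):
    """Best-effort dotted name for a call target."""
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Attribute):
        base = qualified_name(node.value)
        return f"{base}.{node.attr}" if base else node.attr
    return None

def scan(source):
    """Return (line, name) for each call to a flagged function."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = qualified_name(node.func)
            if name in DANGEROUS:
                findings.append((node.lineno, name))
    return findings

snippet = "import pickle\ndata = pickle.loads(blob)\nresult = eval(user_input)\n"
print(scan(snippet))  # → [(2, 'pickle.loads'), (3, 'eval')]
```

The division of labor matches the activity: the tool supplies consistency (every flagged call gets looked at), while the reviewer supplies the security judgment the tool lacks, such as whether a given call is actually reachable from untrusted input.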

ST1.1 Ensure QA supports edge/boundary value condition testing: The QA team goes beyond functional testing to perform basic adversarial tests. They probe simple edge cases and boundary conditions. No attacker skills required. When QA understands the value of pushing past standard functional testing using acceptable input, they begin to move slowly toward "thinking like a bad guy." A discussion of boundary value testing leads naturally to the notion of an attacker probing the edges on purpose. What happens when you enter the wrong password over and over?
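The edge probing ST1.1 describes can be made concrete with a toy password policy; the 8-to-64-character rule here is an assumption invented for the example. The interesting test values sit exactly one character either side of each limit, plus the degenerate inputs an attacker would try first.

```python
def valid_password(pw):
    """Toy policy (assumed for illustration): 8-64 characters with at least one digit."""
    return 8 <= len(pw) <= 64 and any(c.isdigit() for c in pw)

# Functional test: a typical acceptable value.
assert valid_password("hunter2hunter2")

# Boundary-value tests: one step either side of each limit, where off-by-one bugs live.
assert not valid_password("a1b2c3d")        # 7 chars: just below the minimum
assert valid_password("a1b2c3d4")           # 8 chars: exactly the minimum
assert valid_password("x1" + "a" * 62)      # 64 chars: exactly the maximum
assert not valid_password("x1" + "a" * 63)  # 65 chars: just above the maximum

# Adversarial edge cases: empty and absurdly large inputs must fail cleanly.
assert not valid_password("")
assert not valid_password("9" * 1_000_000)
print("all boundary checks passed")
```

Nothing here requires attacker skills, but the habit of deliberately probing the edges is the first step toward the adversarial mindset the activity is trying to build.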

PT1.1 Use external penetration testers to find problems: Many organizations are not willing to address software security until there is unmistakable evidence that the organization is not somehow magically immune to the problem. If security has not been a priority, external penetration testers demonstrate that the organization's code needs help. Penetration testers could be brought in to break a high-profile application in order to make the point. Over time, the focus of penetration testing moves from, "I told you our stuff was broken" to a smoke test and sanity check done before shipping. External penetration testers bring a new set of eyes to the problem.

SE1.2 Ensure host and network security basics are in place: The organization provides a solid foundation for software by ensuring host and network security basics are in place. It is common for operations security teams to be responsible for duties such as patching operating systems and maintaining firewalls. Doing software security before network security is like putting on your pants before putting on your underwear.

CMVM1.2 Identify software defects found in operations monitoring and feed them back to development: Defects identified through operations monitoring are fed back to development and used to change developer behavior. The contents of production logs can be revealing (or can reveal the need for improved logging). In some cases, providing a way to enter incident triage data into an existing bug tracking system (many times making use of a special security flag) seems to work. The idea is to close the information loop and make sure security problems get fixed. In the best of cases, processes in the SSDL can be improved based on operational data.
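A minimal sketch of that feedback loop in Python: scan production log lines for security-relevant patterns and turn each hit into a bug-tracker ticket carrying a security flag, so the defect lands in front of developers instead of dying in the ops logs. The log format, pattern names and ticket fields are all hypothetical.

```python
import re

# Hypothetical patterns for security-relevant log events.
SECURITY_PATTERNS = {
    "auth-failure-burst": re.compile(r"authentication failed for .+ \(attempt \d+\)"),
    "input-rejected": re.compile(r"validation error: field '\w+'"),
}

def triage(log_lines):
    """Turn security-relevant log lines into tracker tickets flagged for the SSG."""
    tickets = []
    for line in log_lines:
        for label, pattern in SECURITY_PATTERNS.items():
            if pattern.search(line):
                tickets.append({
                    "summary": f"[{label}] {line.strip()}",
                    "security_flag": True,  # routes the defect back to development
                })
    return tickets

log = [
    "2012-12-04 10:01 authentication failed for admin (attempt 27)",
    "2012-12-04 10:02 request served in 12ms",
    "2012-12-04 10:03 validation error: field 'redirect_url'",
]
for ticket in triage(log):
    print(ticket["summary"])
```

The security flag is the key design choice: it lets an existing bug tracker carry the loop without new tooling, exactly the pattern the activity says "seems to work" in practice.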

Don't forget that we have covered only twelve of the 111 activities described in the BSIMM! Also note that as "level one" activities, the twelve activities described here are particularly straightforward and simple. The BSIMM also includes "level two" activities (more difficult than level one and requiring more coordination) and "level three" activities (rocket science).

Two new (bonus) software security activities

As an observation-based descriptive model, the BSIMM changes over time. To give you a concrete idea of what this means and to give you a taste of some killer hard activities, we describe two brand-new, recently identified activities. These two are both "rocket science," level-three activities.

Our criteria for adding an activity to the BSIMM are as follows. If we observe a candidate activity not yet in the model, we determine based on previously captured data and BSIMM mailing list queries how many firms probably carry out that activity. If the answer is multiple firms, we take a closer look at the proposed activity and figure out how it fits with the existing model. If the answer is only one firm, the candidate activity is tabled as too specialized. Furthermore, if the candidate activity is covered by the existing activities, or simply refines or bifurcates an existing activity, it is dropped.

Using the criteria above, the two activities added to the BSIMM4 model are:

CR 3.4 Automate malicious code detection: Automated code review is used to identify dangerous code written by malicious in-house developers or outsource providers. Examples of malicious code that could be targeted include: backdoors, logic bombs, time bombs, nefarious communication channels, obfuscated program logic and dynamic code injection. Although out-of-the-box automation might identify some generic malicious-looking constructs, custom rules for static analysis tools used to codify acceptable and unacceptable code patterns in the organization's codebase will quickly become a necessity. Manual code review for malicious code is a good start, but is insufficient to complete this activity.

CMVM 3.3 Simulate software crisis: The SSG simulates high-impact software security crises to ensure software incident response capabilities minimize damage. Simulations could test for the ability to identify and mitigate specific threats or, in other cases, could begin with the assumption that a critical system or service is already compromised and evaluate the organization's ability to respond. When simulations model successful attacks, an important question to consider is the time period required to clean things up. Regardless, simulations must focus on security-relevant software failure and not natural disasters or other types of emergency response drills. If the data center is burning to the ground, the SSG won't be among the first responders.

Putting the BSIMM to work for your firm

We have only scratched the surface of the BSIMM in this article with a quick overview of fourteen of the 111 BSIMM activities. Download a copy today and see which activities make the most sense for your firm. Our bet is you can start with the twelve most common activities and move on from there.

The BSIMM project continues to grow. If you are interested in joining the project, please contact the authors.




Patch Tuesday: Five critical bulletins, Exchange Server fix expected

Microsoft will address 11 vulnerabilities this month, fixing flaws in Internet Explorer, Microsoft Office and Microsoft Exchange Server.

Seven Microsoft security bulletins, five critical and two important, will be released in the December 2012 Patch Tuesday update, according to the advance notification issued by the software giant. The updates affect Microsoft Windows, Windows Server, Word, Internet Explorer and Exchange Server. All currently supported operating systems are affected, Microsoft said.

Bulletins 1 through 5 fix critical remote code execution vulnerabilities. Bulletin 1 requires a restart and affects Internet Explorer (IE) 9 and 10. IE 6, 7 and 8 will also be updated to address this issue.

"This flaw exists in IE 6, 7 and 8, but it's not exploitable in those versions," said Marcus Carey, a security researcher at Boston-based security vendor Rapid7 Inc.

Bulletins 2 and 5 require a restart and affect different versions of Windows XP Service Packs, Windows Server 2003, Windows Vista Service Packs, Windows Server 2008 and Windows 7. Bulletin 2 also addresses issues in Windows 8, Windows Server 2012 and Windows RT.

Bulletin 3 may require a restart to complete the patch. The vulnerabilities addressed in this bulletin affect service packs for Microsoft Word 2003, 2007 and 2010; Microsoft Word Viewer; Microsoft Office Compatibility Pack Service Packs 2 and 3; Word Automation Services; and Microsoft Office Web Apps 2010 Service Pack 1. Bulletin 4 may require a restart and affects Microsoft Exchange Server 2007 Service Pack 3 and Microsoft Exchange Server 2010 Service Packs 1 and 2.

The Exchange Server update should get the most attention, according to Wolfgang Kandek, CTO of Redwood City, Calif.-based Qualys Inc. If the update cannot be applied quickly, patching teams should implement a short-term fix, Kandek said.

The important bulletins, 6 and 7, require a restart. Bulletin 6 addresses vulnerabilities that could allow remote code execution. The affected software includes Windows XP service packs, Windows Server 2003, Windows Vista service packs, Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 8 and Windows Server 2012. Bulletin 7 addresses a security feature bypass vulnerability in Windows Server 2008 R2 and Windows Server 2012.

The bulletins will be released Tuesday, Dec. 11 at approximately 1 p.m. ET.

The November 2012 Patch Tuesday release focused on issues in Internet Explorer and Windows Kernel. November also included the first patches for Windows 8.




Converging audit and risk management programs a flawed approach, says expert

Why do many risk management programs fail? How do security and risk managers know they're providing value to their organization? For answers we've turned to Alex Hutton, currently a faculty member at IANS and the director of operations risk and governance at a major financial institution. Previously, Hutton was a principal in research and risk intelligence with Verizon Business. While there he was co-author of the Verizon Data Breach Investigation Report.


Hutton is also a co-founder of The Society of Information Risk Analysts, and an author at the New School of Information Security blog. Hutton also contributes, or has contributed in the past, to the Cloud Security Alliance (CSA), the Open Information Security Management Maturity Model (O-ISM3), the CIS metrics project and the Open Group Security Forum.

What do you see as one of the primary reasons why risk management programs fail?

Alex Hutton: The number one way to set yourself up for failure is to copy what your audit department does. You could say that audit is concerned with where failures can occur; risk management should be concerned with the frequency and impact of failures. Audit's role is to be consultative and help the organization understand how it can implement or adjust controls. Risk management is an economic function: It is consultative in terms of getting the most bang for your buck in mitigating risk.

So that's why I believe most risk management programs end up failing: They end up just being yet another audit function. They end up merely enforcing policy rather than being consultative about what risk management moves make sense.

You can see how this lack of differentiation between audit and risk management affects the entire industry. There is a large movement to converge the two functions, especially among the big four consulting companies. They're all talking about how they can come in and make you more efficient by converging audit and risk. When you hear from executive leadership that this convergence starts to make a lot of sense to them, it's probably because you are just duplicating audit and your program is fundamentally flawed.

How can those in risk management tell if they have become -- or have always been -- merely an extension of the audit department?

Hutton: There are inherent similarities. Both organizations need to understand controls. Both organizations are interested in impact. But audit doesn't necessarily concern itself with the threat community. Audit doesn't necessarily care about reporting an aggregate picture of the organization's risk. Auditors say they are very interested in aggregate risk, but if you look at how people run audit programs, and at what the industry standards say you should do, you rarely get the level of reporting that a good, functional risk management program will give you.

Look at the charts in the Verizon Data Breach Investigations Report; when you look at the population of threats and their actions, the assets that they are attacking, and the impacts in terms of security attributes, you are digging into language that is completely foreign to most audit departments.

If you want to know how your program is viewed internally, ask your internal business customers for a very straightforward discussion about the differences between your program and what audit is providing. The most frank of your intra-business customers will say, "We already did this for audit. We're already doing this and this." If, in a candid conversation with a member of the business that you trust, you ask, "How much value am I providing you over what happens when you are audited?" and they say, "Not so much," that's a huge indicator that you are doing it wrong.

And I think the full convergence movement of risk and audit is just a recognition that this problem is endemic in risk management programs.

You are not a fan of risk catalogs. Could you explain why?

Hutton: You want to transition from risk cataloging to exposure cataloging. What most organizations do is they build this giant register of bad things that can happen. The risk register becomes the worry list of all the possible things that could go bad. The problem with a risk register is that you never know quite when to stop.

I used to work for a company that was on the flight path at Dulles. What about a jet engine dropping on the data center? That's certainly something you could put into a risk register, but it is not a high-probability event, and you're not going to spend a "bajillion" dollars reinforcing your roof so that it can withstand a jet engine dropping on you.

Organizations end up going out and doing this big kabuki dance about all the problems that could go wrong. But what if you start moving from the risk register population of all the possible bad things to asking, "What's the impact?" For example, go talk to your Exchange Server admin and ask some probing questions: talk about an event where the Exchange Server is compromised; talk about the sort of cost exposure the organization would incur; talk about how to make sure that you don't incur the worst case scenario there in terms of the distribution of losses; talk about how you may reduce the size of that loss distribution.

Cataloging that type of loss impact for your assets will make a whole lot more difference in the value you provide to the organization. You also don't care whether the Exchange Server is out because it was attacked, or because it was shot by a laser beam from an ancient alien astronaut who came back to Earth after seeing a Star Trek episode in deep outer space.
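Hutton doesn't describe an implementation, but his exposure-cataloging idea can be sketched as a cause-agnostic record of what an asset outage would cost, regardless of why the asset went down. The field names and dollar figures below are illustrative assumptions, not anything from the interview:

```python
from dataclasses import dataclass

# Hypothetical exposure-catalog entry: organized around the asset and the
# size of the potential loss, not around an open-ended list of bad things.
@dataclass
class Exposure:
    asset: str                    # e.g. "Exchange Server"
    outage_cost_per_hour: float   # assumed productivity/replacement cost
    worst_case_hours: float       # longest plausible outage
    response_cost: float          # assumed fixed incident-response cost

    def worst_case_loss(self) -> float:
        # Cause-agnostic: the same ceiling applies whether the server was
        # attacked, failed on its own, or met that alien laser beam.
        return self.outage_cost_per_hour * self.worst_case_hours + self.response_cost

exchange = Exposure("Exchange Server", 5_000.0, 72.0, 50_000.0)
print(exchange.worst_case_loss())  # 410000.0
```

Because the entry records impact rather than cause, it never needs to grow the way an open-ended risk register does.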

What about your internal intel functions? What does that tell you about the health of your risk management program?

Hutton: That is one very quick way to tell whether you're duplicating an audit function or you have a real risk management program. How's your intel function? Risk is really the collision of four sets of information -- threat, controls, asset and impact -- and if there's a change to any of that information, such as a new threat, new controls, or a loss of efficiency in certain controls because somebody left the organization, that's something your program needs to concern itself with. It could be new assets that you weren't aware of, that don't comply with security policy or are exposed, or, on the impact side, perhaps a new regulatory penalty. It can be anything that changes the status quo of your threats, controls, assets and impact.

If you don't have an intel function built into your risk management program, then you are more like an audit function than a modern risk management program. Think about it: How many current risk management standards really spend time describing what comprises a good intel function? How many tell you how to source intelligence? How to deal with the potential impact of that intelligence? A typical risk scenario to worry about, based on new intelligence, would be when new malware strikes OS X and you have a population of 1,000 Macs. Now what?
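As a hedged sketch of that intel function, the four information sets Hutton names could be tracked as a baseline, with any change flagging the program for reassessment. All field names and numbers here are hypothetical:

```python
# Minimal sketch: current risk state as the four information sets
# (threat, controls, asset, impact), with an intel diff that reports
# which factors changed and therefore need reassessment.
baseline = {
    "threat":   {"known_malware_families": 12},
    "controls": {"patch_sla_days": 14},
    "asset":    {"mac_population": 1000},
    "impact":   {"max_regulatory_fine": 500_000},
}

def intel_diff(baseline: dict, update: dict) -> list:
    """Return the risk factors whose information has changed."""
    return [factor for factor in baseline if update.get(factor) != baseline[factor]]

# New intelligence: malware now targets OS X, so the threat picture changed.
update = {**baseline, "threat": {"known_malware_families": 13}}
print(intel_diff(baseline, update))  # ['threat']
```

The point of the sketch is the trigger, not the data model: any change to any of the four sets, from any source of intelligence, should cause the program to revisit its assessment.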

Anything else you'd like to add about security and risk and how to tell if they're providing more value to the business than being an extension, or duplication, of the audit department?

Hutton: Yes. I love this exercise. It's about changing your perspective. I consider that when you can remove the word "risk" from your vocabulary for a month, you've actually achieved the Zen of good risk management.

Let's use that Exchange Server example. Someone sends malware that targets Exchange Servers. That's a risk. Most people would go on and talk about the risks: "We believe the risk is high and therefore we think that these controls mitigate that risk and that they should be put in place."

A different conversation, a more modern risk management conversation that didn't use the word "risk," would be: "The potential impact we see to the operation of the Exchange Server from this malware is somewhere between $10,000 and $10 million. Those losses stem from productivity losses and from replacement losses if we can't meet certain objectives. There will also be response costs, because we might have to pull in an incident response team. There may be privacy concerns, and we may face fines and judgments from various regulatory bodies."
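That loss-range conversation lends itself to a simple Monte Carlo illustration. The sketch below is not Hutton's method; it just sums assumed lognormal draws for the cost categories he names (productivity, response, fines) and reads percentiles off the resulting loss distribution. Every parameter is an assumption chosen for illustration:

```python
import random

# Hypothetical parameters: each cost category is modeled as a lognormal
# draw so that most outcomes are modest but the tail reaches the
# multi-million-dollar worst case described in the conversation.
random.seed(42)  # fixed seed so the sketch is reproducible

def simulate_loss() -> float:
    productivity = random.lognormvariate(10.0, 1.2)  # assumed mu/sigma
    response     = random.lognormvariate(9.0, 0.8)
    fines        = random.lognormvariate(8.0, 2.0)
    return productivity + response + fines

losses = sorted(simulate_loss() for _ in range(10_000))
median = losses[len(losses) // 2]
p95 = losses[int(len(losses) * 0.95)]
print(f"median loss ~ ${median:,.0f}, 95th percentile ~ ${p95:,.0f}")
```

Reporting a distribution rather than a "high/medium/low risk" label is exactly the kind of conversation that avoids the word "risk" while still driving a control decision.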

If you can get through those sorts of conversations -- and if you can do that repeatedly for a month and never use the word "risk" -- you've won.

About the author:
George V. Hulme writes about security and technology from his home in Minneapolis. You can also find him tweeting about those topics on Twitter at @georgevhulme.