The Terrifying Progeny of Stuxnet
October 20, 2011
Stuxnet. Few know for certain where it came from. It was clever enough to cripple a country’s nuclear program; stealthy enough to slip into Windows computers around the world on its way to its intended target; covert enough to cause massive damage before being detected; and versatile enough to be repurposed to attack again.
The average malware program that causes obnoxious pop-ups runs 10 to 15 kilobytes. Stuxnet was 500 kilobytes of lean, efficient code. The malware hid itself behind ghost files while it searched for opportunities to spread from computer to computer until it reached its intended target: Iran’s Natanz uranium-enrichment facility.
The code has since been posted on forums around the world, available to all, and policymakers everywhere fear that a similar attack will be launched against their own infrastructure.
Imagine a virus planted in the United States’ electrical grid, programmed to be triggered remotely and designed to covertly shut down systems while displaying business-as-usual data. Or in air traffic control systems. Or on Wall Street.
If such an attack were to occur, how should we respond? When is the threshold for the right of self-defense met? If it is met, how and against whom do we retaliate? If an underground terrorist organization, or even a rogue government agency, were to sponsor such an attack, do we declare war on the entire country that harbored it? And with techniques for rerouting and hiding tracking information advancing far faster than attribution technology, can we even be sure that the groups we point fingers at are the actual perpetrators?
Then there is the problem of the “air gap.” With memory sticks growing ever cheaper and more capacious, a malicious program can be delivered simply by smuggling a flash drive into an otherwise secure facility and plugging it into a computer. It would be impossible to tell who controlled the originating machine, because the first traceable computer would be the first one infected. The air gap, the physical separation between a secure network and the outside world that can be bridged only by a human physically carrying data across, effectively shields a competent hacker from identification.
Hacker 101 teaches that the weakest point in a network is where human error and technology meet. Stuxnet was developed by someone with intimate insider knowledge of Windows and Siemens technology, and it was introduced to its five gateway victim companies, from which it spread to the rest of the world, through a zero-day vulnerability that let the virus sneak past Windows’ defenses by jumping from an infected memory stick to the victim’s computer. One company, although it was attacked only once, had three computers infected by the same memory stick, as users inserted it into one computer, then another, then another, oblivious to the payload it carried.
The U.S. Department of Homeland Security conducted a study in which it dropped memory sticks in the parking lots of randomly targeted businesses. The numbers were astounding: 60 percent of those who picked up one of the sticks plugged it into a company computer; if the company’s logo was printed on the stick’s casing, the figure jumped to 90 percent.
Cyberwar vs. Cybercrime
Just as physical violence spans a range of severities, from a slap on the wrist to genocide, so does cyberviolence. A malicious program can cause annoying pop-ups, log your keystrokes, scan your device for personal information, or break into secure financial sites. Hackers can cause easily repairable damage, such as Lulz Security’s wild-romping takedown of the CIA’s public webpage or Anonymous’ denial-of-service (DoS) attacks; or their attacks can be more sinister, such as the unknown government that broke into the International Monetary Fund’s systems to conduct surveillance on world financial markets. But this is not cyberwarfare. It is cybercrime: financially motivated activity that accounts for an estimated 75 to 80 percent of viruses.
Cyberwarfare proper has been seen in only a few instances. Stuxnet was clearly one of them, although a chance detection before its final update kept it from being completely successful. A more successful example, though perhaps one that has grown to exaggerated proportions, would be the CIA’s 1982 sabotage of a Siberian pipeline. According to a never-substantiated story, the CIA planted a logic bomb in pipeline-control software the Russians were purchasing from Canada; at a preprogrammed point, valves in the pipeline began to malfunction, creating a pressure buildup that led to an explosion a fifth the size of the atomic blast over Hiroshima.
Human history offers little comparison for cyberwarfare on Stuxnet’s scale, except perhaps Trojan horses, Grecian and modern-day alike. We therefore have no norms, no venerated volume of legal decisions, on which to base our response, and the point at which cybercrime crosses into cyberwarfare is far from clear. In the year since Stuxnet’s purpose was decoded, some attention has been paid to creating a standard of norms, and national governments and international organizations have called for a Declaration of Cyberconduct to delegitimize attacks on civilian targets, such as hospital records; on most of the legal issues swirling around the subject, however, there is still almost a complete lack of clarity.
Decoupling is Futile
The United States, like most of the rest of the world, is currently in the mindset of responding to attacks as they come. It does not take a cybersecurity expert to tell you that this mindset is all wrong.
“We’re on a path that is too predictable, way too predictable,” Gen. James Cartwright, vice chairman of the Joint Chiefs of Staff, told reporters at a Pentagon press conference.
The networks of copper and fiber-optic cables that connect the world are far more fragile than most people want to admit. In April 2011, an elderly woman in rural Georgia was digging for copper to sell as scrap when her shovel sliced through a bundle of fiber-optic cabling that had been exposed by heavy rains. The entire country of Armenia and large portions of Georgia and Azerbaijan were left without internet or television for five hours. Barely a month later, arsonists attacked a wooden bridge in Germany, destroying the completely unprotected fiber-optic cables that ran beneath it.
We can perhaps dream about the day when internet and television die and we emerge into the unfamiliar sunlight, but for the companies that relied entirely on internet and telecommunications to do business, those outages were a disaster. In a large-scale internet outage, hospitals could not access patient files, emergency first-responder systems would collapse, and business would grind to a halt.
In a recent poll of electricity-firm executives in fourteen countries, 40 percent said they expect a major attack on the power industry within the next 12 months, and 30 percent said they believe their own company is not prepared for a cyberattack.
In the United States, it is widely assumed that the Department of Homeland Security has a plan for how the government would respond to a widespread attack and for which systems count as critical if triage is required. It doesn’t.
“We don’t know what we want to be when we grow up,” says Seán McGurk, director of the National Cybersecurity and Communications Integration Center at DHS. Congress has not decided what it considers “critical infrastructure,” let alone passed a comprehensive preparedness plan, and it will very likely leave the question to bureaucrats to decide.
At this juncture, something ingenious is happening in policymaking: the U.S. government is sponsoring relationships and dialogues between the public and private sectors. Government has the broad view of policy, while private industry has intimate knowledge of its own field, and the hope is that out of the dialogue between the two, some kind of defense strategy will emerge.
It is an idea that has already had great success in the United Kingdom. Anthony Dyhouse, director of the Cyber Security Program at QinetiQ, said that companies were previously hesitant to report a cyberattack because they did not want to “air their dirty laundry in front of their competitors,” but public-private dialogue has opened that up. Now, when one company comes under attack and reports it publicly, the hope is that others will be able to lock down their systems in time to keep a virus from spreading.
No one knows for sure whether decoupling in that manner will actually be possible in all sectors. A representative from Morgan Stanley related her fears to Warren Getler of the Bertelsmann Foundation: the banking system is so closely bound together that banks would not be able to decouple from one another during an attack on their systems.
The global network is terrifyingly vulnerable to a shovel blade in the wrong place and to the Son of Stuxnet alike. The U.S. Department of Defense is developing defensive strategies and standard operating procedures for a cyberwar attack, and public dialogue is finally awakening to the need for comprehensive cyberdefense plans. The real question? Who wins this race: cyberdefense policy, or the Eastern European hacker with Stuxnet’s code glowing on the screens before her?