Archive for March, 2010
I’ve been following the data breach that occurred at HSBC Private Bank in Switzerland. It seems an employee stole data on 24,000 accounts more than three years ago, but the details of the breach weren’t clear to the company until earlier this month, when the Swiss government returned the stolen data files to the bank.
That type of lengthy delay is unacceptable. Forget for a moment the impact a data breach can have on an organization’s bottom line. Instead, think about the individuals who have been violated, whether through negligence or cybercrime. They deserve to know, and in a timely fashion.
An organization must have clear visibility into all data interactions, including files, events, people, policies and processes. Best-in-class managed file transfer solutions include tamper-evident cryptographic audit logs, as well as easy archival and retrieval of all transferred files and personal messages that were sent back and forth. No security can ever be perfect, but the correct audit capabilities mean that losses can be clearly understood without delay.
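For readers curious what “tamper-evident” means in practice, here is a minimal sketch of the general hash-chaining idea (not any particular product’s implementation): every audit record carries a cryptographic hash of the record before it, so altering or deleting an earlier entry breaks the chain for everything that follows.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

public class TamperEvidentLog {
    private final List<String> entries = new ArrayList<String>();
    private String previousHash = "0000000000000000"; // seed value for the first record

    // Append an audit record; each stored entry includes the hash of the prior entry,
    // so any tampering with earlier records invalidates every later hash.
    public void append(String event) throws Exception {
        String record = previousHash + "|" + event;
        previousHash = sha256Hex(record);
        entries.add(record + "|" + previousHash);
    }

    private static String sha256Hex(String s) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(s.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        TamperEvidentLog log = new TamperEvidentLog();
        log.append("2010-03-01 09:14:02 user=jsmith action=UPLOAD file=accounts.csv");
        log.append("2010-03-01 09:15:40 user=mdoe action=DOWNLOAD file=accounts.csv");
        for (String entry : log.entries) System.out.println(entry);
    }
}
```

A verifier can walk the log from the seed value forward and recompute each hash; the first mismatch pinpoints where the record was altered, which is exactly the “losses can be clearly understood without delay” property described above.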
One last piece of advice to companies that fall victim to a breach: Don’t keep it to yourself. Standard procedure for data breach recovery should be to quickly identify the severity of the breach… and affected individuals have a right to know when sensitive information about them has been compromised.
I recently received an inquiry from a reporter that read like this:
“Are you comforted, or left cold when you hear a product has FIPS 140-2 validation that guarantees it’s implementing encryption modules correctly? Assuming secure data transmission or storage is important in the use case, is this buzzword bingo or a valuable asset?”
My reply to this inquiry was uncharacteristically short:
“Today, fully validated FIPS 140-2 cryptography modules come free or bundled with your OS, your Java runtime, several application packages and some hardware components. These implementations are typically available for your own applications through well-documented APIs.
“Not using FIPS 140-2 cryptography in the year 2010 is like opening a savings account at a bank without the FDIC’s $250K-per-account guarantee. You could do it, and it might work, but why take the risk when a safer option is available for no extra charge?”
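To make the “well-documented APIs” point concrete, here is a minimal Java sketch. The provider name “FipsProvider” is a placeholder, not a real product; substitute whichever FIPS 140-2 validated JCE provider your OS, JVM or vendor supplies. The selection mechanism through the standard java.security APIs is the same either way.

```java
import java.nio.charset.StandardCharsets;
import java.security.Security;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class FipsCipherSketch {
    public static void main(String[] args) throws Exception {
        // "FipsProvider" is a hypothetical name: replace it with the FIPS 140-2
        // validated JCE provider installed on your platform.
        String provider = "FipsProvider";
        boolean haveFips = Security.getProvider(provider) != null;

        KeyGenerator kg = haveFips
                ? KeyGenerator.getInstance("AES", provider)
                : KeyGenerator.getInstance("AES"); // fallback so the sketch still runs
        kg.init(128);
        SecretKey key = kg.generateKey();

        Cipher cipher = haveFips
                ? Cipher.getInstance("AES/CBC/PKCS5Padding", provider)
                : Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal(
                "account data in transit".getBytes(StandardCharsets.UTF_8));

        System.out.println("Encrypted " + ciphertext.length
                + " bytes using provider: " + cipher.getProvider().getName());
    }
}
```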
And so it shall remain: Ipswitch File Transfer products use FIPS 140-2 cryptography to protect data-in-transit and data-at-rest, and will continue to do so until FIPS 140-3 becomes the new law of the land.
Jonathan Lampe, VP of product management at Ipswitch, Inc., the leading developer of comprehensive secure and managed file transfer solutions, will be presenting at the (ISC)2 Secure San Antonio Conference. His session – “When Data Moves, Do You Listen?” – will shed new light on the challenges companies face when enforcing and monitoring consistent file transfer policies.
According to Gartner, 80% of the data individuals move is in the form of a file transfer. Whether sent through an FTP upload, an email attachment or a Web download, organizations need to know exactly what was sent, who sent it and who received it, especially when external parties are involved.
Proving the integrity of the data, the fidelity of the credentials and the consistency of the record is also important. Lampe’s session will offer best practices for ensuring security, visibility and compliance – while arming companies with the knowledge they need to overcome the biggest hurdles.
WHAT: Presentation: “When Data Moves, Do You Listen?”
WHO: Jonathan Lampe, VP of product management
WHEN: Tuesday, March 16, 2010 at 11:15 a.m.
WHERE: (ISC)2 Conference 2010, San Antonio, Texas
“Why are we still FTP’ing files to each other in 2010?”
That is one of the philosophical questions I get to ponder almost once a week as I chat with my colleagues in the industry. Part of the answer is easy: “Almost everyone has or knows about FTP.” Based on that answer, a number of secure variants on FTP (SFTP, FTPS, even our own command-line MOVEit Xfer client) have emerged, along with extensions to the core FTP command set itself.
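Part of why those secure variants caught on is that they change almost nothing about the workflow. Here is a minimal SFTP upload sketch using the open-source JSch library; the host, credentials and file names are hypothetical, and this is not tied to any particular product.

```java
import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class SftpUploadSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical host, user, password and paths for illustration only.
        JSch jsch = new JSch();
        Session session = jsch.getSession("branchuser", "files.example.com", 22);
        session.setPassword("change-me");
        // In production, verify the server's host key instead of disabling the check.
        session.setConfig("StrictHostKeyChecking", "no");
        session.connect();

        ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
        sftp.connect();
        sftp.put("branch-042-20100305.csv", "/inbound/branch-042-20100305.csv"); // upload over SSH
        sftp.disconnect();
        session.disconnect();
    }
}
```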
But why bother moving FILES around when we could all be doing little bitty TRANSACTIONS to each other using SOAP or other transactional-friendly schemes? The answer to that question didn’t come to me until I’d spent several years in the field, traveling between banks, data centers and large corporations in support of distributed, enterprise-class file transfers.
In the 1990s, the local branch of your bank worked something like this. At the end of every business day, after all the customers had left, the tellers would compare the cash in their drawers against what the accumulated transactions of the day on the computer said should be there. During this reconciliation process, adjustments might be made to the record of the day to explain the discrepancies – essentially adding extra transactions after the bank was closed. However, these transactions often did NOT occur in real time. Instead, after all balancing was done and local management was satisfied with the result, a fixed set of files with the branch bank’s “final answer” was sent in to the home office, and everyone went home for the night.
So why did (and do) banks use files for this workflow instead of transactions? Why did their operations experts ask branches to send in only a single set of files?
- It hid the complexity of the bank’s central systems from the branches. Branch managers didn’t have to worry about sending this to this system and that to that system, each with its own error codes: they just sent the files and went home.
- It was less risky for the branch managers and their staff. Branch managers didn’t have to worry about a misbehaving back-end system keeping their tellers on for an extra hour: they just sent the files and went home.
- It let central management put faith in the numbers. When a branch sent in its final report, central management knew that its numbers had undergone local verification, and that its numbers were not going to be superseded by any “last minute” transactions.
Boiled down, the reasons large FILE transfers were used in this interaction (instead of small TRANSACTIONS) were to hide the complexity of the systems on both ends, to reduce the risk of transmission failure and to increase the fidelity of the overall operation. Whenever you find similar “do good work, certify it and throw it over the wall” workflows in business processes, the opportunity to solve them with secure and reliable file transfer usually exists; a small sketch of that pattern follows below.
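Here is a minimal sketch of the “certify it and throw it over the wall” pattern, assuming a hypothetical end-of-day branch batch: the reconciled transactions are written to a single file, a SHA-256 manifest is produced alongside it, and both travel together so the receiving side can verify integrity before loading the numbers into its central systems.

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class BranchBatchExport {
    public static void main(String[] args) throws Exception {
        // Hypothetical end-of-day batch: each line is one reconciled transaction.
        String batch = "2010-03-05,TELLER-01,DEPOSIT,1500.00\n"
                     + "2010-03-05,TELLER-02,WITHDRAWAL,-320.00\n"
                     + "2010-03-05,ADJUSTMENT,RECONCILE,0.45\n";
        Path batchFile = Paths.get("branch-042-20100305.csv");
        Files.write(batchFile, batch.getBytes(StandardCharsets.UTF_8));

        // "Certify" the file: record a SHA-256 digest in a manifest that travels
        // with the batch, so the home office can check integrity on arrival.
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(Files.readAllBytes(batchFile));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));

        Path manifest = Paths.get("branch-042-20100305.manifest");
        Files.write(manifest, (hex + "  " + batchFile.getFileName() + "\n")
                .getBytes(StandardCharsets.UTF_8));

        System.out.println("Ready to transfer: " + batchFile + " and " + manifest);
    }
}
```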
(Will file transfer and transaction-based architectures ever converge? I think they already have begun to – look for more on that in future posts!)
One of the hot debates among cloud watchers has been whether cloud vendors will someday federate and provide transparent services across continental boundaries. Microsoft provided an interesting twist to this debate just before the RSA Conference kicked off here in San Francisco.
As noted by Gavin Clark in The Register:
“Among the features (in Microsoft’s latest U.S. government cloud offerings) are secured and separate hosting facilities, access to which is restricted to a small number of US citizens who have cleared rigorous background checks under the International Traffic in Arms Regulations (ITAR).”
In other words, Microsoft has defined a large private cloud segment that will never span political boundaries. However, not every Federal process must comply with ITAR or even the higher levels of FISMA. It will be interesting to see whether other cloud vendors follow suit with their own private offerings or if private government clouds restricted to and maintained in a single country are just a niche.