Trusting Untrusted Computers (Part 2)

Posted on Sunday, Sep 21st 2008

By Taher Elgamal, Guest Author

In the first article of this series I introduced the dilemma our networked world faces: how can we trust the computer software and hardware we depend on, even though these systems are untrustworthy? Let’s now think of a long-term strategy that would lead us to trust our networked environment more.

It is important to note that the information security industry has focused for the most part on security-specific components, while what customers really struggle with is overall trust in the infrastructure. The key task in analyzing the security of an overall system is identifying the weakest link, which is usually difficult.

Getting back to the issue at hand, we can see the early signs of a movement to improve the security of connected components. Perhaps the first sign is the growth of the authentication industry. Authentication should be seen as an example of a general solution. If every component can validate the identity of whoever is calling it, then we can have more trust in the overall system. This can be generalized to the following concept:
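As a concrete illustration (mine, not the article's), here is a minimal Python sketch of one way a component could validate the identity and integrity of a caller's message: an HMAC tag computed over each message with a hypothetical pre-shared key.

```python
import hashlib
import hmac

# Hypothetical pre-shared key between two components; in practice keys
# would be provisioned and rotated by a key-management system.
SHARED_KEY = b"example-shared-key"

def sign(message: bytes) -> bytes:
    """Compute an authentication tag the receiver can verify."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Check both the integrity and the origin of an incoming message."""
    # compare_digest runs in constant time, avoiding timing side channels
    return hmac.compare_digest(sign(message), tag)
```

A component that verifies every incoming message this way rejects tampered data and data from unauthenticated senders in a single check.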

If every component can validate the authenticity and integrity of all incoming information, then many malware and similar problems will be eliminated. If each software routine can check the validity of the data presented to it, then many of the issues we face today will either disappear or become much easier to deal with. We don’t teach our software engineers to write software in a complete and secure way, and that is the main explanation for many of the trust issues we face. The recent movement to give engineers more security training is helpful, but I think the current model of software architecture needs to be revisited. For example, checking for buffer overflow-type problems is now common practice, so why don’t we advocate that all software routines validate all of their inputs before processing any of them? Examples of tasks to perform are:

  • Keep track of which callers invoke the routine and what types of values they pass
  • Match the logical and physical location of the calling entity against expected values
  • Validate all input ranges and check for unexpected values
  • Check with central authorities whether particular patterns are suspect, and cache copies of important patterns locally
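To make the checklist concrete, here is a minimal Python sketch of a routine that performs these checks before doing any real work. All names here (the callers, the value range, the suspect patterns) are hypothetical illustrations, not from the article.

```python
KNOWN_CALLERS = {"billing_service", "web_frontend"}   # expected calling entities
SUSPECT_PATTERNS = {"<script>", "../", "' OR 1=1"}    # locally cached suspect patterns

call_log = []  # who called, and with what type of value

def process_amount(caller, amount):
    """Apply a trivial transformation, but only after validating everything."""
    # 1. Keep track of which callers invoke the routine and what they pass.
    call_log.append((caller, type(amount).__name__))

    # 2. Match the calling entity against expected values.
    if caller not in KNOWN_CALLERS:
        raise PermissionError("unexpected caller: %r" % caller)

    # 3. Validate input ranges and reject unexpected values.
    if not isinstance(amount, int) or not (0 <= amount <= 10_000):
        raise ValueError("amount out of range: %r" % amount)

    return amount * 2

def process_text(caller, text):
    """Accept text only if it matches no cached suspect pattern."""
    # 4. Check the input against the locally cached suspect patterns.
    if any(p in text for p in SUSPECT_PATTERNS):
        raise ValueError("input matches a known suspect pattern")
    return text.strip()
```

The point is not the specific checks but their placement: each routine refuses bad input at its own boundary instead of assuming some upstream component has already done so.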

If each routine can perform a small portion of what overall monitoring systems do today, then the combined information we collect can quickly empower the network to determine the validity of the systems and entities communicating with us. Decisions can also be made faster, since the information is gathered as close to real time as possible.

Distributing the information-gathering and decision-making processes among multiple portions of the network makes it much harder for malicious or badly written programs to penetrate and harm our networks.
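One way to picture this distribution (a hypothetical sketch of mine, not the article's design) is a simple quorum rule: each component reports whether its local checks passed, and the network trusts an entity only when enough components vouch for it.

```python
def trust_decision(reports, quorum=0.5):
    """Trust an entity only if more than `quorum` of the reporting
    components found it valid; with no reports, default to distrust.

    `reports` maps a component name to the boolean outcome of its
    local checks, e.g. {"auth": True, "db": True, "edge": False}.
    """
    if not reports:
        return False
    approvals = sum(1 for ok in reports.values() if ok)
    return approvals / len(reports) > quorum
```

Because the decision draws on many independent observation points, a malicious or badly written program must subvert a majority of components rather than fool a single central monitor.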

This segment is part 2 in the series : Trusting Untrusted Computers