Proposed Solutions to the questions in the book Distributed Systems - Principles and Paradigms (2002) by Andrew S. Tanenbaum & Maarten Van Steen
Chapter 1 Problems
- The price/performance ratio of computers has improved by something like 11 orders of magnitude since the first commercial mainframes came out in the early 1950s. The text shows what a similar gain would have meant in the automobile industry. Give another example of what such a large gain means.
- Name two advantages and two disadvantages of distributed systems over centralized ones.
- What is the difference between a multiprocessor and a multicomputer?
- The terms loosely-coupled system and tightly-coupled system are often used to describe distributed computer systems. What is the difference between them?
- What is the difference between an MIMD computer and an SIMD computer?
- If jets in the aircraft industry had improved at the rate at which computers have improved, we would today have superb jets: an A380-class superjumbo carrying far more than 555 passengers and costing about the price of a bicycle.
- a) Advantages:
- i) Price/performance ratio: Distributed systems offer a much better price/performance ratio than a single large centralized system. (The textbook notes that the most cost-effective way to obtain computing power is not to buy one expensive CPU with a very high clock speed, but to buy many cheaper CPUs whose combined power is greater.)
- It should also be noted that a collection of microprocessors can not only give a better price/performance ratio than a single mainframe, but can also yield an absolute performance that no mainframe can achieve at any price.
- The text gives a supporting example: a distributed system built from 10,000 modern CPU chips, each running at 50 MIPS (millions of instructions per second), would have a total performance of 500,000 MIPS. For a single processor to achieve that performance, it would have to execute one instruction every 0.002 nanoseconds (2 picoseconds). By Einstein's theory of relativity nothing can travel faster than light, and light covers only 0.6 mm in 2 picoseconds, so such a processor would have to fit inside a cube about 0.6 mm on a side; a computer that small running that fast would generate so much heat that it would melt instantly. (A quick check of this arithmetic appears after this list of advantages.)
- ii) Inherent distribution: Many of today's large systems are fundamentally distributed in nature, so distributed applications are needed to implement them. Take a large company such as Microsoft as an example: it has many branches around the world, each carrying out its own functions in its own location. Each branch has its own management hierarchy, and the top management of each branch reports to the top management at headquarters. This hierarchy makes collecting and processing information much more organized, since records of each branch's operations can be stored on local computers rather than at the Microsoft HQ. When headquarters wants updates from its branches, the company can be run as a commercial distributed system: the entire Microsoft system looks like a single computer to the application programs, but is implemented in a decentralized manner using the local computers at each branch.
- iii) Computer-supported cooperative work: Distributed systems can be used to support "computer-supported cooperative work", allowing geographically separated workers on a joint project to collaborate toward their objective regardless of their individual locations.
- iv) Reliability: Distributed systems offer higher reliability than centralized ones, because the failure of a few component chips in a distributed system has a much lower probability of bringing down the whole system (even though it may cause some loss of overall performance). The textbook suggests distributed systems for critical applications such as controlling nuclear reactors, where high reliability is essential.
- v) Incremental growth: The final advantage is incremental growth. An organization that wants to increase its computing power can, with a distributed system, simply add more processors as the need arises. A company with a single mainframe, by contrast, must either add more mainframes or replace the original mainframe with a new, more powerful one, both of which are risky options.
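A quick sanity check of the 10,000-chip arithmetic mentioned above, written as a small Python sketch (the chip count and per-chip MIPS figure come from the textbook example; everything else is just illustrative):

```python
# Check the textbook's arithmetic for the 10,000-chip example.
chips = 10_000
mips_per_chip = 50                      # millions of instructions per second
total_mips = chips * mips_per_chip      # 500,000 MIPS in aggregate

instructions_per_second = total_mips * 1e6              # 5e11 instructions/s
seconds_per_instruction = 1 / instructions_per_second   # 2e-12 s = 2 ps

speed_of_light_mm_per_s = 3e11          # ~3e8 m/s expressed in mm/s
distance_mm = speed_of_light_mm_per_s * seconds_per_instruction  # ~0.6 mm

print(f"Aggregate performance: {total_mips:,} MIPS")
print(f"Time per instruction : {seconds_per_instruction * 1e12:.0f} ps")
print(f"Light travels        : {distance_mm:.1f} mm in that time")
```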
- b) Disadvantages:
- i) Software required:
Distributed systems need software that is not only very different from that of centralized systems, but that also contains new and different features.
NB// With respect to software, the text gives the example of operating systems for distributed systems, which it says were only beginning to emerge.
- ii) The risk of the communication network:
Because a distributed system depends heavily on its communication network, any problem with that network can negate a large share of the system's advantages. When the network becomes saturated, it must either be replaced or a second one added, which carries both financial and material costs. Messages can also be lost on the network, in which case special software is required to recover them (a sketch of such recovery appears after this list).
- iii) Security:
With the ease of sharing data across a distributed system comes the risk of people easily accessing data they are not meant to see. It is therefore advisable to keep secret or restricted data off the network, and if such data is on any storage media, to keep those media locked in a secure safe.
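The "special software" needed to recover lost messages usually boils down to an acknowledgement-and-retransmission scheme. Below is a minimal, hypothetical sketch in Python; the UDP-plus-ACK protocol, the timeout and the retry count are illustrative assumptions, not something prescribed by the book:

```python
import socket

def send_with_retry(sock: socket.socket, payload: bytes, addr,
                    retries: int = 3, timeout: float = 0.5) -> bytes:
    """Send a datagram and wait for an acknowledgement, resending on loss.

    Hypothetical illustration only: one UDP message answered by one ACK datagram.
    """
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(payload, addr)          # the message may be lost in transit
        try:
            ack, _ = sock.recvfrom(1024)    # wait for the receiver's ACK
            return ack
        except socket.timeout:
            continue                        # assume the message was lost; resend
    raise ConnectionError(f"no acknowledgement after {retries} attempts")
```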
- NB// Both are types of MIMD computers.
| | Multiprocessor | Multicomputer |
|---|---|---|
| a) | These are MIMD computers that have shared memory (i.e. all the component computers share the same memory). | These are MIMD computers that do not have shared memory. |
| b) | In a multiprocessor there is a single virtual address space shared by all the constituent CPUs. If one CPU writes the value 40 to address 900, any other CPU that subsequently reads address 900 will get the value 40. | In a multicomputer, every machine has its own private memory. If one CPU writes the value 40 to address 900 and another CPU then reads its own address 900, it gets whatever value was previously stored there; the write of 40 does not affect the memory of the other CPUs at all. |
| c) | Multiprocessors tend to be more tightly coupled than multicomputers, mainly because they can exchange data at memory speeds. | Multicomputers are in general less tightly coupled than multiprocessors, though there are exceptions such as some fiber-optic based multicomputers that also work at memory speeds. |
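The write-to-address-900 example in row b) can be mimicked in ordinary Python: threads share one address space (like the CPUs of a multiprocessor), while processes each get a private copy of memory (like the nodes of a multicomputer). This is only an analogy for illustration, not how real multiprocessors or multicomputers are programmed:

```python
import threading
import multiprocessing

def writer(mem):
    mem[0] = 40                     # "write 40 to address 900"

def multiprocessor_style() -> int:
    shared = [0]                    # one shared address space
    t = threading.Thread(target=writer, args=(shared,))
    t.start(); t.join()
    return shared[0]                # -> 40: the other "CPU" sees the write

def multicomputer_style() -> int:
    private = [0]                   # the child process gets its own private copy
    p = multiprocessing.Process(target=writer, args=(private,))
    p.start(); p.join()
    return private[0]               # -> 0: the parent's memory is untouched

if __name__ == "__main__":
    print(multiprocessor_style())   # 40
    print(multicomputer_style())    # 0
```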
| | Loosely-coupled system | Tightly-coupled system |
|---|---|---|
| a) Message transfer delay | There is a long delay when a message is sent from one computer to another. | There is a short delay when a message is sent from one computer to another. |
| b) Data rate | The data rate is low, i.e. the number of bits per second that can be transferred is small. | The data rate is high, i.e. the number of bits per second that can be transferred is large. |
| c) Typical use | Loosely-coupled systems tend to be used as distributed systems. NB// This is not always true. | Tightly-coupled systems tend to be used more as parallel systems (working on a single problem). NB// This is not always true. |
| | MIMD (Multiple Instruction stream, Multiple Data stream) | SIMD (Single Instruction stream, Multiple Data stream) |
|---|---|---|
| a) | Each computer is independent, with its own program counter, program and data. | All processors are driven by a single instruction unit that commands many data units; the data units carry out that single instruction in parallel, each on its own individual data. |
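As a loose illustration of the SIMD idea, NumPy's vectorized operations apply a single instruction to many data elements at once, whereas an MIMD machine lets every processor run its own independent program on its own data. NumPy here is just a convenient stand-in, not a real SIMD machine, and the array contents are arbitrary:

```python
import numpy as np

# SIMD style: the instruction "add 1" is issued once and applied, element by
# element, to the whole data array (one instruction stream, many data streams).
data = np.array([3, 7, 11, 15])
print(data + 1)        # [ 4  8 12 16]

# An MIMD system instead runs many independent instruction streams, e.g. one
# node sorting a list while another multiplies matrices on different data.
```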