1. a.) Because there is no clock synchronization among servers in Bayou, there must be some other way to determine a serialization order for writes. A central server therefore inspects all writes and stamps them with a monotonically increasing value, imposing a total ordering on writes. (A sketch of this ordering scheme appears at the end of these solutions.)
b.) Two reasons: 1.) Eventual consistency of the system -- if all writes were suspended for a long enough period of time, eventually all servers' databases would become identical. 2.) The total ordering allows Bayou to truncate servers' write logs, which reduces the amount of disk space consumed by the system.
c.) Here are some reasons: 1.) The central server is a single point of failure. 2.) Low availability -- what if the network is partitioned?

2. a.) 1.) TCP was designed for wired networks, where packet loss is an indication of network congestion. Wireless channels, however, are less reliable than wired links due to higher bit error rates and other factors, so packet loss is not necessarily due to congestion. Since TCP interprets all packet loss as congestion, it shrinks the congestion window. 2.) Bursty errors in the wireless environment can cause multiple packet losses within one congestion window. Mary's example in lecture shows how multiple losses then go on to cause a coarse-grained timeout.
b.) 1.) Fluctuation and shrinkage of the window throttle the bandwidth of a flow and under-utilize the true available bandwidth of the end-to-end connection. 2.) Coarse-grained timeouts cause TCP to enter slow start, which chokes bandwidth by setting the congestion window size back to 1.
c.) 1.) ELN (Explicit Loss Notification) marks ACKs for packets that have been lost for non-congestion-related reasons. The sender then knows not to shrink the congestion window. This solves problem #1. 2.) Selective ACKs acknowledge up to 3 non-contiguous ranges of packets, so the sender is explicitly aware of multiple losses in a congestion window and can remain in fast retransmit until all lost packets have been resent. This solves problem #2. (A sketch of this sender-side logic appears at the end of these solutions.)

3. a.) IP-in-IP decapsulation means that the mobile host can encapsulate a standard Mobile IP packet (source = MH's home address, destination = CH's IP) inside an IP packet with source = MH's local IP, destination = CH's IP. Because the outer source IP is located within the foreign network, the foreign network's ingress filter will allow it to pass. Therefore, the mobile host can communicate directly with the CH, avoiding costly bidirectional tunneling.
b.) IP-in-IP encapsulation means that the CH can encapsulate a packet with source = CH's IP, destination = MH's home address inside a packet with source = CH's IP, destination = MH's local IP. Thus, the CH can bypass the HA, talk directly to the mobile host, and avoid triangle routing. (A sketch of both encapsulations appears at the end of these solutions.)
c.) In this case, we don't need Mobile IP at all -- the CH and MH can communicate directly with regular IP.

4.) Here are Armando's grading guidelines:
(a) I was looking for as many as possible of the following:
- some transformations do become obsolete, but small displays still necessitate other transformations
- cryptography becomes easier, so more options for doing secure access
- client-side code (applets, scripting) becomes more prevalent
- large memory size enables aggressive client caching (NOTE: NOT proxy caching! that's done today, and will likely continue to be done for reasons that have nothing to do with the performance of the client.)
- aggressive compression easier to do if fast CPU
- if clients continue to increase in sophistication, they'll place more demand on servers, and proxy still serves to diffuse load from heavily-loaded servers
- service composition/aggregation should be done at proxy if network to the client is slow
Basically, you got full credit for mentioning 3 or more of these; 3 points for mentioning 2 of them; 2 points for mentioning one of them if it specifically deals with how increased CPU and memory would affect the design of a client-proxy-server architecture; 0 or 1 otherwise. Note, simply mentioning other things that proxies do *today* (eg move complexity of adaptation away from servers) isn't enough, since the question asks how an *improvement in the clients* would affect the design of the system.
(b) I was looking for these, with similar grading as above but a little more latitude since this one was tougher:
- aggressive client prefetching (no need to cache if bandwidth is free)
- content adaptation, format transcoding maybe still needed (small screen, inability to parse complex formats)
- can probably download "raw" bitmaps without having client do ANY layout work, i.e. move the rendering task more aggressively toward the proxy
- can do streaming video for small screen, without complex decoding
- can do "information hiding" to enhance privacy (mix legitimate info in with garbage, and let the proxy do the sifting out; this was proposed a couple of years ago by Ron Rivest and others, "security without cryptography")
- compute-intensive input modalities like gesture recognition or speech recognition could be done: raw bits sent up to proxy for decoding there (if latency is good enough)
- proxy can act as a compute/storage server since bandwidth is high enough to transport large chunks of data around (ie like the X server/client model)
- prevent overloading of servers, esp. given high-bandwidth clients
Again, mentioning things that proxies do *today* is not enough, unless you also said how that functionality would be affected by relaxing assumption (ii).
(c) Here I was looking for these, and in particular, I was looking for you to say something you *didn't* already say above (unless your answers to the above were pretty exhaustive...):
- proxies essentially become aggregation/composition points
- possibility for extensive customization beyond what is practical for a single site/device
- economy of scale and increased availability resulting from offloading from servers, eg caching, content distribution
- continue to serve legacy systems
- if wired and desktop technology also continues to increase, proxies will still be needed to "fill the gap"
- provide anonymity, firewall for client, group-state repository for collaborative work
- "transport conversion" functions such as MPA proxies can do
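Finally, the sketches referenced in the answers above.

For question 1, a minimal sketch (in Python, with made-up class and method names; Bayou's real protocol and anti-entropy machinery are more involved) of the idea in parts a.) and b.): the central/primary server stamps writes with monotonically increasing commit sequence numbers, every replica applies committed writes in that order, and the committed prefix can then be truncated from the write log.

# Minimal sketch of primary-commit ordering (hypothetical names; not Bayou's actual code).

class Primary:
    """The central server stamps each write with a monotonically increasing
    commit sequence number (CSN), imposing the total order."""

    def __init__(self):
        self.next_csn = 0

    def commit(self, write):
        csn = self.next_csn
        self.next_csn += 1
        return csn, write


class Replica:
    """Other servers hold writes tentatively until the primary commits them."""

    def __init__(self):
        self.tentative = []   # writes whose final position is not yet known
        self.stable = []      # (csn, write) pairs in commit order

    def receive_write(self, write):
        self.tentative.append(write)

    def receive_commit(self, csn, write):
        # A committed write can never be reordered again, so every replica
        # that holds the same committed prefix has the same database state,
        # and that prefix can safely be truncated from the write log.
        if write in self.tentative:
            self.tentative.remove(write)
        self.stable.append((csn, write))
        self.stable.sort(key=lambda pair: pair[0])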
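For question 2(c), a rough sender-side sketch (hypothetical field names and unit-sized "segments"; a real TCP stack is far more involved): an ELN-marked ACK triggers retransmission without shrinking the congestion window, and SACK blocks expose every hole in the window so they can all be repaired during fast retransmit instead of waiting for a coarse-grained timeout.

# Hypothetical sender-side reaction to ACKs (illustrative only, not a real TCP stack).
from dataclasses import dataclass, field

@dataclass
class Ack:
    next_expected: int                                  # cumulative ACK
    eln: bool = False                                   # Explicit Loss Notification mark
    sack_blocks: list = field(default_factory=list)     # up to 3 (lo, hi) received ranges

@dataclass
class Sender:
    cwnd: int = 10
    highest_sent: int = 0

    def retransmit(self, seq):
        print(f"retransmit segment {seq}")

    def on_ack(self, ack: Ack):
        if ack.eln:
            # Loss flagged as non-congestion-related (e.g. wireless bit errors):
            # retransmit, but do NOT shrink cwnd -> addresses problem #1.
            self.retransmit(ack.next_expected)
            return
        if ack.sack_blocks:
            # SACK shows every hole in the window, so all lost segments are
            # repaired within fast retransmit instead of forcing a coarse
            # timeout -> addresses problem #2.
            covered = {s for lo, hi in ack.sack_blocks for s in range(lo, hi + 1)}
            for seq in range(ack.next_expected, self.highest_sent + 1):
                if seq not in covered:
                    self.retransmit(seq)
            self.cwnd = max(1, self.cwnd // 2)   # a single multiplicative decrease
            return
        # Plain loss with no extra information: assume congestion as usual.
        self.retransmit(ack.next_expected)
        self.cwnd = max(1, self.cwnd // 2)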
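For question 3, a toy picture (hypothetical addresses and field names; not a real IP stack) of the two IP-in-IP arrangements: in part a.) the MH puts its local address on the outer header so the foreign network's ingress filter accepts the packet, and in part b.) the CH tunnels directly to the MH's local address so the HA and triangle routing are bypassed.

# Toy IP-in-IP illustration for question 3 (hypothetical addresses, not a real stack).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    src: str
    dst: str
    inner: Optional["Packet"] = None   # payload packet when IP-in-IP encapsulated

MH_HOME  = "10.0.0.5"      # MH's home address (in the home network served by the HA)
MH_LOCAL = "172.16.0.7"    # MH's local (care-of) address in the foreign network
CH       = "192.168.1.9"

# (a) MH -> CH, CH can decapsulate: the outer source is local to the foreign
#     network, so its ingress filter passes the packet; no reverse tunnel via the HA.
outbound = Packet(src=MH_LOCAL, dst=CH,
                  inner=Packet(src=MH_HOME, dst=CH))

# (b) CH -> MH, CH can encapsulate: the outer destination is the MH's local
#     address, so the packet skips the HA and triangle routing is avoided.
inbound = Packet(src=CH, dst=MH_LOCAL,
                 inner=Packet(src=CH, dst=MH_HOME))

def decapsulate(p: Packet) -> Packet:
    # The receiver strips the outer header and processes the inner packet normally.
    return p.inner if p.inner is not None else p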