Archives for category: Internet

There is an app for everything! Or at least for nearly everything. This means traditional search (Google, Bing, …) will face some changes. We will no longer use search to find cheap flight offers, a product review, or whatever else. For things we do more or less frequently, an app will do this for us: Hipmunk, Expedia, TripAdvisor, and many, many more. These dedicated apps will be our starting point, at least on the mobile phone and the tablet. As a result, search and the related advertising business might run into a problem.

This also includes the desktop. For many things we used to do in the browser, we see more and more dedicated apps popping up: Evernote, Wunderkit, and many others give you an app for your desktop. As someone who has never been a big fan of doing everything in the browser, I am very happy to see us switching back from the browser to apps.

It is interesting to see how the social networks (Facebook, Twitter, Google+, …) are evolving compared to the original Internet: the social networks are closed boxes; it is not really possible to connect one network to another. In contrast, IP networks and applications like email are designed to be open. Any company (or private user) can connect to the Internet and communicate via open protocols, or run its own email server that talks to other email servers via well-defined protocols.

Now, I know that the user base of a social network is, to some extent, its capital. Hence, a certain degree of lock-in may be intended. However, compared with the history of the Internet, I wonder whether this lack of interoperability will hinder innovation and evolution. Thanks to its open architecture, the Internet (and the PC) evolved quickly and provided a great platform that motivated and fostered others to try new ideas. The APIs around the social networks allow such new ideas as well, but they seem to focus on the “around”: access to the core network is limited.

Overall, I wonder whether this closed architecture will hinder evolution and innovation in the long run. Maybe, once their usage patterns and applications stabilize, we can find a way to deeply interconnect the different social networks. This would also allow one to choose a social network based on criteria such as privacy and functionality, and reduce the pressure of being in one particular network just because everybody else is there, too.

Side notes:

1. I know that the companies in the social network world are highly innovative. Running their own systems allows them to deploy new ideas quickly. Note that my thoughts are more general.

2. This is not meant as a call for standardization: standardization processes as we know them today may be too heavy-weight for this fast-moving area.

Some thoughts on the browser and the OS: I believe that it is time to move applications out of the browser. There are too many things we do inside our web browser that we could do more easily, better, and at lower CPU load and network traffic elsewhere.

The mobile devices show us the way to go: there are little apps for everything. With the App Store for Mac OS, Apple brings this concept to the desktop and laptop, too. (Note that Linux has essentially had an app store since its first days: apt, yast, yum, and whatever they are called.)

To me, the core benefit of the app concept is that I get a little application that is tailored to its purpose. Hence, it does not have to deal with the limitations of the browser in terms of languages, HTML, JavaScript, Ajax, Flash, the lack of offline support, and, until recently, the lack of right clicks. It is just so nice to have a couple of little standalone windows: one for email, one for notes (I love Evernote), one for blog writing (such as Windows Live Writer).

With Google Chrome, and especially Chrome OS, we see Google addressing some of these issues by deeply integrating the OS and the browser. However, everything still seems to be bound to the browser (and HTML etc.). Now, I know that HTML forms a great platform abstraction layer (you can view it on essentially all systems; throw in a mobile version of your site and it can be conveniently viewed on smartphones, too) and also forms a second “narrow waist” above IP (there is a HotNets paper on this). However, all this loading, rendering, and scripting is just too inefficient (sure, Ajax etc. help, but still, this is quite a beast).

So, what does this mean for the browser and the OS? From my point of view, the OS (or services on top of it) needs to provide two features: (1) a just-in-time compiler (a nice JIT, ideally with HotSpot-style optimizations) and (2) an HTML renderer. Both should be common services shared across multiple active applications. The compiler should handle the efficient execution of all the scripted high-level languages we use today: JavaScript, Python, Flash (and other Adobe stuff), etc. Additionally, it can deal with the byte code of Java, C# (and .NET in general), and all the other stuff. LLVM, .NET, and the new Java core show that this can be done nicely. The apps may use HTML internally for visualization where appropriate (see Evernote etc.); the HTML renderer would then parse their HTML as well as normal web browsing sessions.

I believe that it is important to make both the renderer and the JIT essential services of the OS. Java shows how heavyweight it is to start up such a compiler just for a single little application. Similar effects can be noticed when starting up all the different browsers you have installed (especially as many of them are not only HTML renderers but also JIT compilers for JavaScript etc.). Furthermore, all the sandboxing and protection that modern browsers do often duplicates functionality that the OS provides anyway.
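To put a rough number on that startup cost, here is a minimal timing sketch; Python merely stands in for any managed runtime (JVM, JavaScript engine, …), and the toy workload is obviously a placeholder:

```python
import subprocess
import sys
import time

N = 10
APP = "sum(range(10_000))"  # stand-in for a tiny app's work

# Cold start: a brand-new interpreter process for every single app,
# which is roughly what launching N browsers or JVMs amounts to.
t0 = time.time()
for _ in range(N):
    subprocess.run([sys.executable, "-c", APP], check=True)
cold = time.time() - t0

# Shared runtime: one long-running service executes all N apps.
t0 = time.time()
for _ in range(N):
    exec(compile(APP, "<app>", "exec"))
shared = time.time() - t0

print(f"{N} cold starts: {cold:.2f}s, shared runtime: {shared:.4f}s")
```

On a typical machine the per-process cold starts come out orders of magnitude slower than reusing the warm runtime, which is exactly the cost a shared JIT service would pay only once.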

What does this mean for the OS? Looking at the JIT aspects, maybe it is time to make Singularity (one of my favorite papers, or better, series of papers) or JavaOS a reality. From the HTML renderer perspective, I believe Chrome OS is on a good way; they just have to embrace the app concept more and leave the browser behind.

As usual at SIGCOMM, all papers were really good. It was hard to pick my three favorites; here are the ones I have chosen today (tomorrow this might look different):

  1. Efficient Error Estimating Coding: Feasibility and Applications (BEST PAPER): “Without actually correcting the errors in the packet, EEC enables the receiver to estimate the fraction of corrupted bits in the packet, which is perhaps the most important meta-information of a partial packet.” The paper shows how this information can be used for video streaming etc., where it is not so important that all bits are received correctly. Hence, EEC sits somewhere between error correcting codes (ECC) on the one hand and CRCs on the other. (A toy sketch of the estimation idea follows after this list.)
  2. SourceSync: A Distributed Wireless Architecture for Exploiting Sender Diversity: “SourceSync enables concurrent senders to synchronize their transmissions to symbol boundaries, and cooperate to forward packets at higher data rates than they could have achieved by transmitting separately. The paper shows that SourceSync improves the performance of opportunistic routing protocols.” Ok, this is cool: in classic opportunistic networks we exploited receiver diversity; now we do the same on the sender side. I wonder whether this would also work for wireless sensor networks. However, the tight synchronization requirements (down to the symbol level) seem to make it at least challenging.
  3. Understanding Block-level Address Usage in the Visible Internet: “We have little information about the edge of the network. Decentralized management, firewalls, and sensitivity to probing prevent easy answers and make measurement difficult. Building on frequent ICMP probing of 1% of the Internet address space, we develop clustering and analysis methods to estimate how Internet addresses are used.” This paper gives interesting insights into the edge of the Internet, i.e., the end hosts. I like this paper for a special reason: all they measure is pings to 1% of the Internet address space, and from this simple information they then smartly draw their conclusions.
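As promised above, here is a toy sketch of the error-estimation idea behind EEC. This is my own simplification under assumed parameters (group size, number of groups), not the paper’s actual code construction: the sender attaches parity bits computed over random groups of payload bits, and from the fraction of failed parity checks the receiver estimates the bit error rate without correcting a single bit.

```python
import random

def parities(bits, groups):
    """One parity bit per group of payload bit positions."""
    return [sum(bits[i] for i in g) % 2 for g in groups]

random.seed(0)
n, g_size, n_groups, ber = 10_000, 8, 512, 0.02
payload = [random.randint(0, 1) for _ in range(n)]
groups = [random.sample(range(n), g_size) for _ in range(n_groups)]
sent = parities(payload, groups)

# Channel: flip each payload bit independently with probability ber
# (assume the parity bits themselves arrive intact, for simplicity).
received = [b ^ (random.random() < ber) for b in payload]

# A parity over g bits fails iff an odd number of its bits flipped,
# which happens with probability (1 - (1 - 2p)^g) / 2; invert for p.
fail = sum(s != r for s, r in zip(sent, parities(received, groups))) / n_groups
ber_hat = (1 - (1 - 2 * fail) ** (1 / g_size)) / 2
print(f"true BER {ber:.3f}, estimated {ber_hat:.3f}")
```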

Listening to the SIGCOMM talks, I realized that there are two playgrounds where people can apply tons of clean-slate research to today’s networks and (still) do everything new and shiny: data centers and wireless sensor networks…

1. Breathe to Stay Cool: Adjusting Cell Sizes to Reduce Energy Consumption: adapt cell coverage according to network load (even making some cells bigger and turning others off) to make cellular networks more energy efficient. Interestingly, this seems to align well with work going on at Ericsson and probably others. (A toy sketch of the idea follows after this list.)

2. Reducing Energy Consumption in IPTV Networks by Selective Pre-joining of Channels: how to save energy in IPTV networks; the results are not really promising, but it was a very nice talk, as always from Jon Crowcroft.

3. Energy Proportionality of an Enterprise Network: build power models for your network infrastructure and use SNMP to measure the current configurations and operation modes (traffic rates etc.) in order to estimate the overall power consumption; then use this information to optimize your network. (A rough sketch of such a model follows below.)
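For talk 1 (cell breathing), here is a toy greedy sketch of the idea. The loads, capacity, and threshold are made up, geography is ignored (any cell may absorb any other cell’s users), and the paper’s actual algorithm will certainly differ:

```python
def plan_cells(loads, capacity=1.0, off_threshold=0.2):
    """Greedily switch off lightly loaded cells, handing their load to
    the least-loaded remaining neighbour that still has capacity."""
    on = dict(enumerate(loads))                # cell id -> current load
    for cell, _ in sorted(on.items(), key=lambda kv: kv[1]):
        if cell not in on or on[cell] >= off_threshold or len(on) == 1:
            continue
        target = min((c for c in on if c != cell), key=lambda c: on[c])
        if on[target] + on[cell] <= capacity:  # neighbour absorbs the load
            on[target] += on[cell]
            del on[cell]                       # this cell can power down
    return on

# Four cells, two of them nearly idle: the plan keeps only two running.
print(plan_cells([0.10, 0.15, 0.60, 0.05]))
```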
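And for talk 3, a rough sketch of such a power model: a base (chassis) term plus per-port terms scaled by the utilization one would read from SNMP interface counters. All coefficients below are made-up placeholders, not measured values:

```python
BASE_W = 150.0                       # assumed chassis idle power
PORT_W = {"1G": 1.0, "10G": 5.0}     # assumed power per active port
LOAD_W = {"1G": 0.5, "10G": 2.0}     # assumed extra power at full line rate

def device_power(ports):
    """ports: list of (speed, utilization in [0, 1]) for each active port."""
    return BASE_W + sum(PORT_W[s] + LOAD_W[s] * u for s, u in ports)

# E.g. a switch with 24 active 1G ports at 30% load and two 10G uplinks
# at 60% load; in practice the utilizations come from SNMP counters.
print(f"{device_power([('1G', 0.30)] * 24 + [('10G', 0.60)] * 2):.1f} W")
```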

1. Greening Backbone Networks: Reducing Energy Consumption by Shutting Off Cables in Bundled Links: the energy consumption of routers seems to depend on whether links are on or off, not on their actual load. Hence, you should turn off cables to save energy and keep the remaining ones well loaded. (A toy sketch of this appears at the end of this post.)

2. How Internet Concepts and Technologies Can Help Green and Smarten the Electrical Grid: how grids can be made green by learning from the Internet. Very interesting talk. Current problems in the grid: storage, over-provisioning, and losses in distribution (lines, transformation). However, the grid of tomorrow will look different: people start producing energy, too (wind, solar), and hence no longer only consume it (bidirectional energy flow); they store energy; and smart meters collect information on energy usage at high detail. This causes funny problems: with solar cells and windmills in every garden, we have millions of not really predictable sources. This results in a complex distribution/backhaul of two-way flows: energy sinks now become sources, too. Furthermore, wind and sun are strong where people and industry are typically not located: in the desert, at the sea.
The talk suggests using Internet techniques such as peer-to-peer to solve grid problems and identifies interesting similarities between power grids and the Internet. They face similar challenges: both are heterogeneous, critical to society, and ossified. They also share a simple API (the simple plug vs. the simple protocol known as IP). But there are nice differences, too: electrons do not have headers and are not routed in a packet-based fashion, and storing energy is much more difficult than storing data. For more, please read the paper; this talk is too interesting to type and listen at the same time.

3. How Green is IP-Telephony?: compares P2P relaying to centralized relaying. But the interesting question was left open (as future work): the comparison to the classic PSTN.

4. Shipping to Streaming: Is This Shift Green?: interesting talk, but not really surprising: when you do your data center right, streaming is more energy efficient.
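As referenced above, here is a toy sketch of the bundled-link idea from talk 1: if power tracks the number of active cables rather than their load, carry the demand on as few cables of the bundle as possible. The capacity and per-cable power figures are illustrative assumptions:

```python
import math

CABLE_CAPACITY_GBPS = 10.0
CABLE_POWER_W = 8.0      # assumed per active cable, roughly load-independent

def bundle_plan(demand_gbps, bundle_size):
    """Smallest number of cables that carries the demand, and its power."""
    active = max(1, math.ceil(demand_gbps / CABLE_CAPACITY_GBPS))
    active = min(active, bundle_size)  # cannot exceed the physical bundle
    return active, active * CABLE_POWER_W

# A 4-cable bundle carrying 12 Gbps needs only 2 active cables,
# halving the bundle's power while keeping those cables well loaded.
print(bundle_plan(12.0, 4))
```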