Massimo Marchiori's Past Short Bio

<compressed-past-bio>

[for the latest activities, see the home page]
Received a B.S. degree with highest honors, and an M.S. degree in Mathematics summa cum laude, with every exam mark throughout his studies at the maximum (30/30). Then received a Ph.D. in Computer Science with a thesis that won the EATCS (European Association for Theoretical Computer Science) Best Ph.D. Thesis Award of the year, for the invention of local analysis in programming languages.

Worked at the University of Padua (Italy), at CWI (The Dutch National Research Center), and at the MIT Lab for Computer Science (USA) in the Computation Structures Group.

Now a member of the research staff of the World Wide Web Consortium (W3C), at its headquarters located at MIT, Cambridge, MA, USA, and also a research professor in Computer Science at the University of Venice, Italy.

Research interests include the World Wide Web and intranets (information retrieval, search engines, metadata, web site engineering, web advertisement, and digital libraries), programming languages (constraint, visual, functional, and logic programming), visualization, genetic algorithms, neural networks, and rewriting systems. Has published papers (no, hopefully not the usual self-replications dictated by the "publish or perish" rule... ;) on all the above topics in various journals and in the proceedings of international conferences.

Received a variety of awards, including the Gini Foundation Award for innovative research, the IBM Young Scientist Award, and the Lifetime Membership Award of the Oxford Society for "his lifetime achievements, and the efforts for the development of an XML Query standard".

Solved a variety of open research problems.

Has been a pioneer in some important fields:

Modularity: the inventor of the theory of optimal modular analysis for complex systems ("local analysis").
Complex systems (as the word says... "complex") are too heavy to analyze as a whole, so people resort to modular analysis: studying smaller pieces of the system, and from them inferring properties of the whole complex system. Before local analysis this was more of an art, with no formal understanding of the reasons, and above all the limitations, of why and when studying the "small" is better than studying the "large". Local Analysis provided the formal setting to understand the difference, and made it possible to study precisely the limits of the "small versus the large". A byproduct of Local Analysis has been the Theory of Vaccines, showing how the best analysis in the small can be obtained by modeling the idea of a "vaccine" (familiar from medicine) within the apparently unrelated field of computer science.
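To give a flavour of the idea, here is a minimal Python sketch (a toy illustration only, not the actual Local Analysis formalism: the three little functions, the "non-negative" property and the sampled check are all made up for the example). A compositional property is checked locally on each small piece, and from that the property of the whole composition is inferred without ever analyzing the composed system directly.

    # Toy sketch of the modular-analysis idea (illustration only, not Local Analysis itself).
    # Property of interest: "maps non-negative inputs to non-negative outputs".
    # If every piece preserves it, so does their composition.

    def scale(x):    return x * 3
    def clamp(x):    return max(x, 0)
    def add_ten(x):  return x + 10

    pipeline = [clamp, scale, add_ten]   # the "complex system": the composition of the pieces

    def preserves_nonnegative(f, samples=range(0, 100)):
        # Local check on one module (here just a sampled approximation, for illustration).
        return all(f(x) >= 0 for x in samples)

    # Analyze each piece in the "small"...
    if all(preserves_nonnegative(f) for f in pipeline):
        # ...and infer the property of the "large", i.e. the whole pipeline.
        print("The composed pipeline maps non-negative inputs to non-negative outputs.")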
Web Search Engines: the first to introduce the concept of hyperinformation in search engines, an idea later developed, for example, in Google
There has been a big shift in the way search engines like Google work today, due to the quantum leap of considering so-called "hyper-information" instead of just the normal "textual information" that had been used in the past. Hyperinformation was introduced in 1996 (original report, first hypersearch engine built and running, submission to WWW6), and was then accepted at WWW6 (final publication in 1997), together with a critical analysis of the situation at the time. Hyperinformation can be used in two ways: one is to measure the "visibility" of a page, the other is to measure its "real potential". The first is more biased, but can have some computational advantages; the second is the one I indicated as the best to use. After the presentation on hyperinformation at WWW6, Larry Page and I had a nice table discussion on the pros and cons of the two ways of using hyperinformation. The next year, Page and Brin wrote a paper about PageRank (correctly citing the hyperinformation predecessor, although not mentioning the obvious similarities...), which is essentially a simplified "visibility" use of hyperinformation (the dual of the "potential"), and soon afterwards they developed Google on top of this hyperinformation measure.
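As a flavour of the "visibility" way of exploiting link structure, here is a minimal Python sketch (a toy illustration only, in the PageRank family, and not the original hyperinformation formulas; the four-page link graph, the damping value and the iteration count are made up for the example).

    # Toy sketch of a "visibility"-style link score (illustration only).
    # Each page's score is repeatedly redistributed along its outgoing links,
    # so pages with many/important incoming links end up more "visible".

    links = {                 # hypothetical pages and the pages they link to
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],
    }

    def visibility_scores(links, damping=0.85, iterations=50):
        pages = list(links)
        score = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new = {}
            for p in pages:
                # a page gains visibility from the pages that point to it
                incoming = sum(score[q] / len(links[q]) for q in pages if p in links[q])
                new[p] = (1 - damping) / len(pages) + damping * incoming
            score = new
        return score

    print(visibility_scores(links))   # "C" ends up the most visible: it has the most incoming links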
Web Advertisement & Search Engines: the first to formally introduce and study the problem of search engine persuasion (sep)
Nowadays, concepts like "search engine optimization", "search engine spam" and so on are well known: they describe how to artificially "pump up" web pages into the top ten of search engine results. There are lots of companies that charge for such services, and all sorts of techniques are employed.
The first study of this problem, more formally called sep (search engine persuasion), appeared back in 1997 with the article "Security of World Wide Web Search Engines", which for the first time formally introduced the concept, and explained and classified the possible techniques and solutions. Amazingly enough, what looked like visionary science fiction in 1997 (the very concept, and techniques like invisibility and so on) was realized years later, and is now commonplace technology in a flourishing market of utmost importance.
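As a flavour of one classic sep technique, here is a minimal Python sketch (a toy illustration only, not taken from the 1997 article; the HTML snippet and the simple regular-expression check are made up for the example) that looks for "invisible" text, i.e. keyword stuffing whose colour matches the page background.

    # Toy sketch: flag "invisible" text, a classic search engine persuasion trick
    # (illustration only; a crude inline-style check, not a real spam detector).
    import re

    def find_invisible_text(html, background="#ffffff"):
        """Return span contents whose inline style sets the text colour to the background colour."""
        hits = []
        pattern = r'<span[^>]*style="[^"]*color:\s*([^;"]+)[^"]*"[^>]*>(.*?)</span>'
        for match in re.finditer(pattern, html, re.IGNORECASE | re.DOTALL):
            colour, text = match.group(1).strip().lower(), match.group(2)
            if colour == background.lower():
                hits.append(text.strip())
        return hits

    page = ('<body><p>Real content.</p>'
            '<span style="color: #ffffff">cheap flights cheap flights cheap flights</span></body>')
    print(find_invisible_text(page))   # ['cheap flights cheap flights cheap flights']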
Semantic Web: the first semantic web system (Metalog, the next-generation system for querying and reasoning on the Semantic Web)
"Semantic Web" is nowadays a common word for a boosting field. Back in time (1998), however, this was just a cool word for few intimates, gathering with the inventor of the Web (Tim Berners-Lee) in front of a whiteboard at MIT. During those times, the need to pass from a visionary idea to practice emerged, and this lead to the development of Metalog, afaik the first semantic web system to be produced in history. Metalog allows querying and reasoning for the Semantic Web, and more than that: to do it in a way that people can find easy and natural. Again, what just seemed visionary at the time is now a boom, and the Semantic Web is nowadays a huge emerging area, with lot of applications and papers being developed, towards the original vision of a third-generation Web of Information.

</compressed-past-bio>


Massimo Marchiori