Reviewing in Machine Learning

June 11, 2017

A common subject for mutual commiseration in the community is the quality of reviewing.  In huge and specialised conferences like NIPS and ICML, there are so many papers and so many reviewers that the matchup between reviewers and papers is generally quite good, almost as good as for a journal article.  In smaller conferences, like ACML, and for grant applications in relatively small places like Australia (e.g., the ARC), the matchup can be a lot poorer.

Of course, one needs to be aware of The Great NIPS Reviewing Experiment of 2014.  It is a grand applied statistical experiment that only machine learning folks could think of 😉  I mention it because it is important to understand that the reviewing process is very challenging, and we as a community are trying our hardest.

Now, I think it’s very reasonable for some reviewers not to be specialists in the subject of the paper, merely “knowledgeable”.  After all, we would like the paper to be readable by more than just the 20 people who focus on that very specific topic.  These non-specialist reviewers generally flag themselves, so the meta-reviewers and authors know to take their comments with a (respectful) grain of salt.  But they can still be excellent in related and broadly applicable areas like experimental methodology and mathematical definitions of models, so they are still an important part of the reviewing ecosystem.

But I still find general aspects of reviewing enlightening.

Case in point is our recent ICML 2017 paper “Leveraging Node Attributes for Incomplete Relational Data”.  Two reviewers said strong accept and one a mild reject.  For the would-be rejecter, the method was too simple.  We knew this paper was not full of the usual theoretical complexities expected of an ICML paper, of course, so we made sure the experimental work was rock solid.  It was a risk submitting to ICML anyway; as anyone with experience knows, the experimental work at ICML can be patchy, and it’s not something reviewers there generally look for.  If you want quality experimental work in machine learning, go to the knowledge discovery conferences like SIGKDD, certainly not a machine learning conference!

The reason we submitted the paper to ICML was that this simple method beat all previous work handily, in predictive performance, speed, or both.  Simplicity, it seems, has its advantages, and people should find out about it when it happens.  But if it was so damn simple, why didn’t someone try it already (in truth, it wasn’t that simple)?  And given it works so much better, shouldn’t people find out that for this problem all the ICML-ish model complexity of previous methods was unnecessary? 😉  Now, we did add a tricky hierarchical part to our otherwise simple model, just to appease the “meta is better” crowd, and we’re now busy trying to figure out how to add a novel stochastic process (something I love to do).

But unnecessary complexity is something I’m not a big fan of.  My favourite example is papers starting off with two pages of stochastic process theory before, finally, getting to the model and implementation.  But the model they implement is a “truncated” one, so it is completely finite, and there is no use whatsoever of stochastic process theory in the actual algorithmic detail or key theory of the model.  In a longer journal format, however, linking the truncated version with the full stochastic process theory is important to round off the treatment.
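
To make concrete what “truncated” means, here is a minimal sketch, in Python with NumPy (the function name is my own), of the usual trick: cutting off the Dirichlet process stick-breaking construction at K components.  Once truncated, the “process” is just a finite probability vector, and nothing in the code needs stochastic process theory.

    import numpy as np

    def truncated_stick_breaking(alpha, K, seed=None):
        """Mixture weights for a DP(alpha) truncated at K components.

        Draws K-1 Beta(1, alpha) sticks and gives the last component the
        leftover mass, so the K weights sum to exactly one.
        """
        rng = np.random.default_rng(seed)
        sticks = rng.beta(1.0, alpha, size=K - 1)
        leftover = np.concatenate(([1.0], np.cumprod(1.0 - sticks)))
        return np.concatenate((sticks, [1.0])) * leftover

    weights = truncated_stick_breaking(alpha=2.0, K=20, seed=0)
    print(weights.sum())  # 1.0 -- a completely finite mixture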

As for ARC reviewing, when a reviewer says something is not possible or is a poor choice, we take great pride in soon after publishing a paper that proves them wrong.


ICML 2017 paper: Leveraging Node Attributes for Incomplete Relational Data

May 19, 2017

Here is a paper with Ethan Zhao and Lan Du, both of Monash, that we’ll present in Sydney.

Relational data are usually highly incomplete in practice, which inspires us to leverage side information to improve the performance of community detection and link prediction. This paper presents a Bayesian probabilistic approach that incorporates various kinds of node attributes encoded in binary form in relational models with Poisson likelihood. Our method works flexibly with both directed and undirected relational networks. The inference can be done by efficient Gibbs sampling which leverages sparsity of both networks and node attributes. Extensive experiments show that our models achieve state-of-the-art link prediction results, especially with highly incomplete relational data.
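
For readers who want the flavour of the approach, here is a toy sketch in Python with NumPy of the general idea of a Poisson-likelihood relational model with binary node attributes.  It is not the model in the paper (and it only sketches the likelihood side, not the Gibbs sampler); all the names and dimensions are made up for illustration.

    import numpy as np

    # Toy illustration of the general idea (NOT the model in the paper).
    rng = np.random.default_rng(0)
    N, K, F = 50, 5, 8                        # nodes, latent factors, attributes

    X = rng.integers(0, 2, size=(N, F))       # binary node attributes
    theta = rng.gamma(1.0, 1.0, size=(N, K))  # latent node factors
    beta = rng.gamma(1.0, 0.1, size=(F, K))   # maps attributes to factors

    # Each node's factors combine its latent part with factors induced
    # by its observed binary attributes.
    phi = theta + X @ beta

    def poisson_loglik(y, i, j):
        """Log-likelihood (up to a constant) of count y on edge i -> j."""
        lam = phi[i] @ phi[j]                 # Poisson rate for the edge
        return y * np.log(lam) - lam

    print(poisson_loglik(1, 0, 1))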

As usual, the reviews were entertaining, and there were some interesting results that didn’t make it into the paper.  It’s always enlightening doing comparative experiments.


ALTA 2016 Tutorial: Simpler Non-parametric Bayesian Models

April 21, 2017

They recorded the tutorial I ran at ALTA late in 2016.

Part 1 and part 2 are up on YouTube, about an hour each.


Visiting SMU in Singapore

November 23, 2016

Currently visiting the LARC with Steven Hoi; I will give a talk tomorrow on my latest work, based on Lancelot James’ extensive theories of Bayesian non-parametrics.

Simplifying Poisson Processes for Non-parametric Methods

The main thing is that I’ve been looking at hierarchical models, which have a theory all of their own, though one easily developed using James’ general results.  Not sure I’ll put the slides up because it’s not yet published!


On the “world’s best tweet clusterer” and the hierarchical Pitman–Yor process

July 30, 2016

Kar Wai Lim has just been told they “confirmed the approval” of his PhD (though it hasn’t been “conferred” yet, so he’s not officially a Dr yet), and he has spent the time post-submission pumping out journal and conference papers.  Ahhh, the unencumbered life of the fresh PhD!

This one:

“Nonparametric Bayesian topic modelling with the hierarchical Pitman–Yor processes”, Kar Wai Lim, Wray Buntine, Changyou Chen, Lan Du, International Journal of Approximate Reasoning, 78 (2016) 172–191.

includes what I believe is the world’s best tweet clusterer.  It certainly blows away the state-of-the-art tweet pooling methods.  The main issue is that the current implementation only scales to a million or so tweets, not the 100 million or more expected in some communities.  That is easily addressed with a bit of coding work.

We did this to demonstrate the rich and largely unexplored possibilities for semantic hierarchies that one has using simple Gibbs sampling with Pitman–Yor processes.  Lan Du (Monash) started this branch of research.  I challenge anyone to do this particular model with variational algorithms 😉  The machine learning community in the last decade unfortunately got lost in the complexities of Chinese restaurant processes and stick-breaking representations, for which complex semantic hierarchies are, well, a bit of a headache!
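
The basic Pitman–Yor seating rule behind such samplers fits in a few lines.  Below is a sketch in Python of the plain two-parameter Chinese restaurant process on its own (certainly not the paper’s hierarchical sampler, and the names are mine): customer n+1 sits at table t with probability proportional to that table’s count minus the discount, or opens a new table with probability proportional to the concentration plus the discount times the current number of tables.

    import numpy as np

    def pitman_yor_crp(n_customers, discount=0.5, concentration=1.0, seed=None):
        """Table occupancy counts after seating customers one by one."""
        rng = np.random.default_rng(seed)
        counts = []                                   # customers per table
        for _ in range(n_customers):
            probs = np.array([c - discount for c in counts]
                             + [concentration + discount * len(counts)])
            probs /= probs.sum()
            t = rng.choice(len(probs), p=probs)
            if t == len(counts):
                counts.append(1)                      # open a new table
            else:
                counts[t] += 1
        return counts

    tables = pitman_yor_crp(10000, discount=0.7, seed=0)
    print(len(tables))   # table count grows like n**discount: a power law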


Is C that bad of a programming language?

May 2, 2016

The following quote about C comes from a Quora answer to the above question:

I don’t think C gets enough credit. Sure, C doesn’t love you. C isn’t about love–C is about thrills. C hangs around in the bad part of town. C knows all the gang signs. C has a motorcycle, and wears the leathers everywhere, and never wears a helmet, because that would mess up C’s punked-out hair. C likes to give cops the finger and grin and speed away. Mention that you’d like something, and C will pretend to ignore you; the next day, C will bring you one, no questions asked, and toss it to you with a you-know-you-want-me smirk that makes your heart race. Where did C get it? “It fell off a truck,” C says, putting away the boltcutters. You start to feel like C doesn’t know the meaning of “private” or “protected”: what C wants, C takes. This excites you. C knows how to get you anything but safety. C will give you anything but commitment. In the end, you’ll leave C, not because you want something better, but because you can’t handle the intensity. C says “I’m gonna live fast, die young, and leave a good-looking corpse,” but you know that C can never die, not so long as C is still the fastest thing on the road.

I love it.  I still do most of my programming in C using a mix of vi, emacs, gdb, valgrind and all that good old stuff, resorting to Python/Perl scripts sometimes for automation.  I know I should be using a proper development UI, loading up my code with bulky libraries like Boost, and using complex build systems like CMake and Autoconf, hell, why not even Imake (I have done all of these in the past).  I should also be using great inventions like multiple inheritance, operator overloading and recursive templates, but I find C’s simple approach to memory handling and functions just a lot safer.

Most of my students use Java, though.  Automatic garbage collection and the IDE support seem to be what they like, as well as the loads of good code out there to work on.


A small model of ArXiv abstracts

April 12, 2016

I’ve been working with Dorota Glowacka of the University of Helsinki on a search system built on ArXiv.org.  We have a demo appearing in SIGIR 2016.  Here is a model with 100 topics, built using normalised gamma priors on topics (giving each topic a variance parameter as well as a mean parameter), run over the 1.1 million abstracts up to March 2016.  The model took about 6 hours to run on my desktop.
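
As a sketch of what a “normalised gamma prior” on a topic can look like (one simple reading, not necessarily the exact construction used in the demo; the names are mine): draw independent gamma weights for each word and normalise, so each topic carries a shape parameter controlling variance around its mean word probabilities.  With a shared unit rate, as below, this reduces to an asymmetric Dirichlet.

    import numpy as np

    def normalised_gamma_topic(mean, shape, seed=None):
        """Draw one topic (a word distribution) centred on `mean`.

        mean  : positive vector summing to one (mean word probabilities)
        shape : per-topic concentration; larger shape means less variance
        """
        rng = np.random.default_rng(seed)
        weights = rng.gamma(shape * np.asarray(mean), 1.0)
        return weights / weights.sum()

    V = 1000                                  # vocabulary size
    mean = np.full(V, 1.0 / V)                # uniform mean for illustration
    topic = normalised_gamma_topic(mean, shape=50.0, seed=1)
    print(topic.sum())                        # 1.0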

This is a huge PNG file (2.8 MB).  You will not be able to view it properly unless you:

  • load it up on a big screen,
  • click on the image to enter image view mode,
  • then scroll down to the bottom right and click on “View full size” to bring it up,
  • and then zoom around to view.