Machine Learning/Kaggle Social Network Contest

From Noisebridge

Revision as of 22:41, 18 November 2010


Official Contest Links

Official Data Downloads


Task                           Status   Target date  Subpage
Load data                      started  -            Load data
Choose problem representation  started  -            Problem Representation
Generate candidate features    0%       11/24        Features
Fit to model                   0%       11/24        Model
Win competition                0%       11/24        Prize Plan

Key Contest Info

The data have been downloaded using the API of a social network. There are 7.2m contacts/edges among 38k users/nodes. These were drawn randomly while ensuring a certain level of closedness.

You are given 7,237,983 contacts/edges from a social network. The first column is the outbound node and the second column is the inbound node. The ids have been encoded so that the users are anonymous; ids range from 1 to 1,133,547.

There are 37,689 outbound nodes and 1,133,518 inbound nodes. Most outbound nodes are also inbound nodes so that the total number of unique nodes is 1,133,547.

The way the contacts were sampled makes sure that the universe is roughly closed. Note that not every relationship is mutual.

The test dataset contains 8,960 edges from 8,960 unique outbound nodes (social_test.csv). Of those, 4,480 are true edges and 4,480 are false edges. You are tasked with predicting which are true (1) and which are false (0). You must supply a file with one `outbound node id,inbound node id,prediction` per row, where the prediction is a value in [0,1]; this means you can assign a probability of an edge being true. Submissions are scored on AUC. A random model will have an AUC of 0.5, so you need to do better than that (i.e. achieve a higher AUC). Your entry should conform to the format in sample_submission.csv.
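Since submissions are scored by AUC, it helps to be able to compute it locally before submitting. A minimal rank-based sketch (the labels and scores below are made up for illustration; `auc` is our own helper name, not part of the contest tooling):

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formula:
    the probability that a randomly chosen true edge is scored higher
    than a randomly chosen false edge."""
    # Assign 1-based ranks, averaging ranks within tie groups.
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    pos = sum(1 for y in labels if y == 1)
    neg = len(labels) - pos
    rank_sum = sum(r for r, y in zip(ranks, labels) if y == 1)
    return (rank_sum - pos * (pos + 1) / 2.0) / (pos * neg)

# Perfect separation scores 1.0; a constant predictor scores 0.5.
print(auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # → 1.0
print(auc([1, 0, 1, 0], [0.5, 0.5, 0.5, 0.5]))  # → 0.5
```

Note that only the ordering of the scores matters for AUC, so predictions do not need to be calibrated probabilities.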

You are encouraged to explore techniques which explain the social network/graph. The best entrant should try to explain their approach/method to other users.

Don't despair if your first couple of solutions score low; this is an exploratory process.

Brainstorming on Process

  • We shouldn't have a single approach to solving the problem. If people have ideas they should run with them and report back their success/failure to the group. The collaboration between our diverse ideas/approaches/experiences will be our strength in working together.
  • Since this is throwaway code for this competition only, we need not get hung up on efficiency or elegant implementations. That said, if we hit a point where our code cannot perform fast enough, we can address it then instead of overengineering from the get-go.
  • Theo suggested that we start by using things like python/ruby scripts to massage the starting data set into something more useful (with more features), then analyse and visualize that using things like R.
  • Jared was wondering whether people think it's legit to use the mailing list for discussion, or whether we should create a discussion list for the competition to avoid spamming the main list with competition collaboration. (Update: Maybe we can use the wiki instead?)
  • Also, as we transform the dataset into different views, we are going to end up with some large files that we will be passing around to each other. Any suggestions on how to best do that? Jared has been using Dropbox (see dumps below).

Brainstorming on Strategy

  • The dataset forms a graph of directed edges between vertices. At the core of this problem will be performing analysis on that graph. The first intuitive approach that came to mind was that the shorter the distance between two vertices along existing edges, the more likely it is that an edge could/should exist between those vertices.
  • After the talk, Erin, Theo, and Jared stumbled on the idea that some vertices might be uber-followers (meaning more outbound edges than the average vertex) and that some vertices might be uber-followees (meaning more inbound edges than average). This reminded us of PageRank for link graphs, so perhaps we can draw from techniques in that vein. The application of this to our problem might be in weighting. For example, people who follow lots of people might be more likely to follow someone further out in their "network", whereas someone who doesn't follow many people might be less likely to follow someone outside their "network".
  • Since the edges are directional, we know that it's possible for people to "follow" someone without that person "following back". At first glance it might seem that the reverse edges would be likely in cases like this. However, consider a "hub" user with lots of followers who doesn't reciprocate with edges back to his followers: the information about who follows him is less important in determining whom he would follow. Conversely, for a user who commonly reciprocates with followbacks, the information about who follows her might be useful in suggesting whom she might follow.

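The shortest-distance idea above can be sketched as a plain breadth-first search over the directed edge set. This is a toy sketch, not our agreed method; on the full 7.2m-edge graph you would want to cap the search depth, as done here with `max_depth`:

```python
from collections import deque

def shortest_distance(adj, src, dst, max_depth=4):
    """BFS over directed edges; returns the hop count from src to dst,
    or None if dst is not reachable within max_depth hops
    (the cap keeps the search cheap on a large graph)."""
    if src == dst:
        return 0
    seen = {src}
    frontier = deque([(src, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        for nxt in adj.get(node, ()):
            if nxt == dst:
                return depth + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return None

# Toy directed graph: 1→2→3 and 1→4.
adj = {1: [2, 4], 2: [3]}
print(shortest_distance(adj, 1, 3))  # → 2
print(shortest_distance(adj, 3, 1))  # → None (edges are directed)
```

The second call returning None illustrates the directionality point above: reachability one way says nothing about the other.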
Useful Links

Working Data Dumps

  • Adjacency list based from the training data: 
  First column: outbound vertex
  Remaining columns: list of vertices to which it points
  Note: Useful when loaded up as a hashtable keyed on outbound vertex returning the list.
  • Adjacency list of the reversed Graph:
  First column: inbound vertex
  Remaining columns: list of vertices which point to it
  Note: Useful for following edges backwards quickly; load as a hashtable keyed on inbound vertex returning the list.
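A sketch of how both hashtables could be built in one pass over the raw two-column edge file (the filename in the commented-out snippet is an assumption; substitute whatever the training dump is actually called):

```python
import csv
from collections import defaultdict

def build_adjacency(rows):
    """Build the forward adjacency list (who each vertex points to) and
    the reversed one (who points to each vertex) from (outbound, inbound)
    edge pairs."""
    follows = defaultdict(list)    # keyed on outbound vertex
    followers = defaultdict(list)  # keyed on inbound vertex
    for out_node, in_node in rows:
        follows[out_node].append(in_node)
        followers[in_node].append(out_node)
    return follows, followers

# Reading the real dump would look like this (filename assumed):
# with open("social_train.csv") as f:
#     follows, followers = build_adjacency(
#         (int(a), int(b)) for a, b in csv.reader(f))

edges = [(1, 2), (1, 3), (2, 1)]
follows, followers = build_adjacency(edges)
print(follows[1])    # → [2, 3]
print(followers[1])  # → [2]
```

Using defaultdict means lookups on vertices with no edges in that direction can be handled with `.get(v, [])` rather than special-cased.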

Possible Features

  • nodeid
  • nodetofollowid
  • median path length
  • shortest distance from nodeid to nodetofollowid
  • inbound edges
  • outbound edges
  • clustering coefficient
  • reciprocation probability (num of edges returned / num of outbound edges)

The response variable is the probability that the nodeid → nodetofollowid edge will be created in the future.
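As an example of deriving one of these features, the reciprocation probability can be computed directly from the two adjacency hashtables described above (a sketch; `follows` and `followers` are assumed to map each vertex to its outbound and inbound neighbour lists respectively):

```python
def reciprocation_probability(node, follows, followers):
    """num of edges returned / num of outbound edges, per the feature list."""
    out = follows.get(node, [])
    if not out:
        return 0.0  # no outbound edges: define the probability as zero
    fans = set(followers.get(node, []))  # everyone who follows `node`
    returned = sum(1 for target in out if target in fans)
    return returned / len(out)

# Toy graph: 1 follows 2 and 3; 2 follows 1 back; 3 does not.
follows = {1: [2, 3], 2: [1]}
followers = {1: [2], 2: [1], 3: [1]}
print(reciprocation_probability(1, follows, followers))  # → 0.5
print(reciprocation_probability(2, follows, followers))  # → 1.0
```

The other degree-based features (inbound edges, outbound edges) fall out of the same two hashtables as simple list lengths.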
