Implementing and Evaluating Sparsification Methods in Probabilistic Networks
Abstract: Most queries on probabilistic networks assume the possible-worlds semantics, which leads to exponential execution time. Deterministic networks can apply sparsification methods to reduce their size while preserving some structural properties, but until recently no equivalent methods existed for probabilistic networks. In the first work in this field, Parchas, Papailiou, Papadias and Bonchi proposed sparsification methods for probabilistic networks by adapting gradient descent and expectation-maximization algorithms. In this report, the two proposed algorithms, Gradient Descent Backbone (GDB) and Expectation-Maximization Degree (EMD), were implemented and evaluated under different input parameters by comparing how well general graph properties, expected vertex degrees, and ego betweenness approximations are preserved after sparsifying different datasets. In the sparsified networks we found that the entropies had mostly dropped to zero, effectively yielding deterministic networks. EMD generally showed better results than GDB, particularly when using relative discrepancies; however, at lower alpha values the EMD methods can generate disconnected networks, more so when using absolute discrepancies. The methods produced unexpected results at higher alpha values, which suggests they are not stable. Our evaluations confirm that the proposed algorithms produce acceptable results in some cases; however, finding the right input parameters for a specific network can be time consuming. Further testing on networks with diverse structures and different input parameters is therefore recommended.
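To illustrate one of the measures compared above: in a probabilistic (uncertain) network, each edge carries an existence probability, and the expected degree of a vertex is the sum of the probabilities of its incident edges. The sketch below shows this computation; the edge-list representation and the helper name are our own illustration, not part of the evaluated implementation.

```python
# Expected vertex degree in a probabilistic graph:
# each edge (u, v) has an existence probability p, and
# E[deg(v)] = sum of p over the edges incident to v.

def expected_degrees(edges):
    """edges: iterable of (u, v, p) triples with 0 <= p <= 1."""
    degrees = {}
    for u, v, p in edges:
        degrees[u] = degrees.get(u, 0.0) + p
        degrees[v] = degrees.get(v, 0.0) + p
    return degrees

# Example: a small uncertain triangle (probabilities chosen
# to be exactly representable in binary floating point).
g = [("a", "b", 0.75), ("b", "c", 0.5), ("a", "c", 0.25)]
print(expected_degrees(g))  # {'a': 1.0, 'b': 1.25, 'c': 0.75}
```

A sparsifier that preserves expected degrees aims to keep each vertex's sum of incident edge probabilities close to this value in the smaller network.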