We address the problem of designing practical methods for natural language semantic matching. While current deep neural networks can achieve high accuracy on benchmark datasets, their model structures are typically rigidly defined; as a result, they often generalize poorly and cannot easily adapt to varying amounts of training data. In this paper, we propose a Markov network model that leverages linguistic independence assumptions to decompose the matching problem into subproblems that can be solved independently, improving both generalization and adaptability to different amounts of training data. The proposed framework is orthogonal to deep neural networks: they can be integrated into the framework as potential functions on the cliques corresponding to the smaller subproblems. Experiments on diverse real-world datasets show that our method outperforms the current state of the art in accuracy, in generalization, and in performance under various training data limitations.