Analyzing human reactions from text is an important step towards automated modeling of affective content. The variance in human perceptions and experiences leads to a lack of uniform, well-labeled, ground-truth datasets, which limits the scope of supervised neural learning approaches. Recurrent and convolutional networks are popular for text classification and generation tasks, particularly where large datasets are available, but are inefficient when dealing with unlabeled corpora. We propose a gated sequence-to-sequence, convolutional-deconvolutional autoencoding (GCNN-DCNN) framework for affect classification with limited labeled data. We show that, compared to a vanilla CNN-DCNN network, gated networks improve performance for affect prediction as well as text reconstruction. We present a regression analysis comparing outputs of traditional learning models with the information captured by hidden variables in the proposed network. Quantitative evaluation with joint, pre-trained networks, augmented with psycholinguistic features, achieves the highest accuracies for affect prediction, namely frustration, formality, and politeness, in text.
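To make the gating idea concrete, the following is a minimal, illustrative sketch of a GLU-style gated 1D convolution (in the spirit of gated convolutional networks), where one convolution produces candidate features and a second, sigmoid-activated convolution gates them elementwise. All shapes and names here are hypothetical and not taken from the paper's actual architecture.

```python
import numpy as np

def gated_conv1d(x, W_a, W_b, b_a, b_b):
    """GLU-style gated 1D convolution (valid padding):
    out[t] = conv_a(x)[t] * sigmoid(conv_b(x)[t]).
    x: (seq_len, d_in); W_a, W_b: (k, d_in, d_out); b_a, b_b: (d_out,)."""
    k, d_in, d_out = W_a.shape
    T = x.shape[0] - k + 1          # number of output positions
    out = np.zeros((T, d_out))
    for t in range(T):
        window = x[t:t + k]         # (k, d_in) slice of the input
        # Sum over the kernel and input-feature axes -> (d_out,)
        a = np.tensordot(window, W_a, axes=([0, 1], [0, 1])) + b_a
        g = np.tensordot(window, W_b, axes=([0, 1], [0, 1])) + b_b
        out[t] = a * (1.0 / (1.0 + np.exp(-g)))  # elementwise sigmoid gate
    return out

# Toy usage: 10 tokens with 8-dim embeddings, kernel width 3, 16 filters.
rng = np.random.default_rng(0)
x = rng.normal(size=(10, 8))
W_a = rng.normal(size=(3, 8, 16)); b_a = np.zeros(16)
W_b = rng.normal(size=(3, 8, 16)); b_b = np.zeros(16)
y = gated_conv1d(x, W_a, W_b, b_a, b_b)
print(y.shape)  # (8, 16)
```

The gate lets the network modulate which convolutional features pass through at each position, which is one plausible reason gated variants can outperform their ungated counterparts on both prediction and reconstruction.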