We address the sampling bias and outlier issues in few-shot learning for event detection, a subtask of information extraction. We propose to model the relations between training tasks in episodic few-shot learning by introducing cross-task prototypes. We further propose to enforce prediction consistency among classifiers across tasks, making the model more robust to outliers. Our extensive experiments show consistent improvements on three few-shot learning datasets. These findings suggest that our model is more robust when labeled data for novel event types is limited.