Abstract—A model’s interpretability is essential to many practical applications, such as clinical decision support systems. In this paper, we present a novel interpretable machine learning method that models the relationship between input variables and responses as human-understandable rules. The method is built by applying tropical geometry to fuzzy inference systems, so that variable encoding functions and salient rules can be discovered through supervised learning. Experiments on synthetic datasets demonstrate the classification and rule-discovery performance of the proposed algorithm. Furthermore, as a proof of principle, we present a pilot application that identifies heart failure patients eligible for advanced therapies. In this application, the proposed network achieved the highest F1 score among the compared methods, and it learned rules that clinical providers can interpret and use. In addition, existing fuzzy domain knowledge can be transferred directly into the network to facilitate model training; incorporating such knowledge improved the F1 score by over 5%. These characteristics make the proposed network promising for applications that require model reliability and justification.