Publications and Research
Document Type
Article
Publication Date
3-1-2011
Abstract
Privacy issues in social and business e-networks are daunting in complexity: private information about oneself might be routed through countless artificial agents. For each such agent, two questions about trust arise: where an agent must access (or store) personal information, can one trust that artificial agent with the information; and where an agent does not need to access or store personal information, can one trust that agent not to do so? It would be infeasible for any human being to determine explicitly, for each artificial agent, whether it can be trusted; no human being has the computational resources to make such an explicit determination. There is a well-known class of problems in the artificial intelligence literature, known as frame problems, for which explicit solutions are computationally infeasible. Human common-sense reasoning solves frame problems, though the mechanisms employed are largely unknown. I will argue that the trust relation between two agents (human or artificial) functions, in some respects, as a frame problem solution: a problem is solved without the need for a computationally infeasible explicit solution. This is an aspect of the trust relation that has remained unexplored in the literature. Moreover, there is a formal, iterative structure to agent-agent trust interactions that serves to establish the trust relation non-circularly, to reinforce it, and to “bootstrap” its strength.
Comments
This article originally appeared in Information, available at DOI: 10.3390/info2010195.
© 2011 by the authors. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).