Through our research, we found that little prior work exists on our particular case: a Guilt Machine where everyone is potentially lying, the killer is among the suspects, and the Machine must determine from their statements who the killer is.
We initially attempted to research strategies for the games Clue and Mafia, since we assumed our Guilt Machine would resemble either a townsperson in a Mafia game or a random player in a Clue game. However, we have since realized that both games depend on one important aspect our setting lacks: multiple rounds. Unlike in those social deduction games, our program has only one 'round' of information gathering from everyone else, so it cannot perform the cross-round elimination that is integral to those strategies.
We then researched how humans detect whether someone is lying. Much of it is based on body language (for example, according to an article from Time Magazine, people who are lying tend to blink and fidget less and hesitate longer before speaking), but some of it is based on personal characteristics. That same article mentioned that the more intelligent and/or creative a person is, the more likely they are to lie. Conversely, the shorter and less detailed a story is, the more likely it is to be false (Barker). This means that our Guilt Machine could, in theory, use the number of terms in a testimony as an indicator of its believability.
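As a rough illustration (not our final implementation), this length heuristic could be encoded in Prolog as follows; the testimony/2 representation, the suspect names, and the statements are placeholders invented for this sketch.

```prolog
% Hypothetical sketch: score a testimony's believability by how detailed it is.
% testimony(Suspect, Statements): Statements is the list of claims a suspect makes.
testimony(alice, [saw(bob, kitchen), heard(scream), time(midnight)]).
testimony(bob,   [was(asleep)]).

% believability(+Suspect, -Score): longer, more detailed testimony scores higher.
believability(Suspect, Score) :-
    testimony(Suspect, Statements),
    length(Statements, Score).

% more_believable(+A, +B): A's testimony is more detailed than B's.
more_believable(A, B) :-
    believability(A, ScoreA),
    believability(B, ScoreB),
    ScoreA > ScoreB.
```

For example, the query more_believable(alice, bob) succeeds because Alice's testimony contains more individual claims than Bob's.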
These symptoms arise because lying imposes a greater cognitive load than telling the truth. Many strategies for detecting lies therefore involve increasing that mental load, for example by asking someone to repeat their story in reverse or by asking them for more details (Barker).
One article, entitled Distributed Belief Revision as Applied Within a Descriptive Model of Jury Deliberations, established many requirements and solutions for a project similar to ours, albeit much more complex. The article lays out a general approach to belief revision, beginning with its first principle: consistency, meaning that revision must yield a consistent knowledge space. The next principle states that revision must change the knowledge space as little as possible; we cannot remove any more facts from our knowledge base than the absolute minimum. Finally, incoming information must always belong to the revised knowledge space (Dragoni).
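A minimal sketch of how these three principles could look in Prolog is given below; the list-based knowledge base, the conflict/2 facts, and the predicate names are our own assumptions for illustration, not the article's formalism or our final code.

```prolog
% conflict/2 names pairs of beliefs that cannot coexist (illustrative example).
conflict(alibi(bob, kitchen), seen_at(bob, garden)).

% Conflicts are symmetric.
conflicts_with(A, B) :- conflict(A, B).
conflicts_with(A, B) :- conflict(B, A).

% revise(+KB, +NewFact, -Revised):
%   the new fact always belongs to the revised space, and only beliefs that
%   directly conflict with it are dropped (minimal change), so no conflict
%   involving NewFact survives the revision.
revise(KB, NewFact, [NewFact|Kept]) :-
    exclude(conflicts_with(NewFact), KB, Kept).
```

For example, revise([alibi(bob, kitchen), motive(carol)], seen_at(bob, garden), R) keeps motive(carol) but drops the conflicting alibi, leaving R = [seen_at(bob, garden), motive(carol)].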
The article also listed requirements for a belief revision framework: the framework must be able to reject incoming information, and it must be able to recover previously discarded beliefs. The article goes on to explain that a system must handle coupled information rather than the information alone, meaning the algorithm must consider the source of the information alongside the information itself. Finally, the article describes how a system must be able to combine contradictory and concomitant (occurring together) evidence (Dragoni). In the context of our project, this means that new information must affect the old, and old information must affect the new.
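The "coupled information" requirement could be sketched as follows; the reliability/2 values, the 0.5 threshold, and the predicate names are assumptions made only for this example.

```prolog
% Each incoming claim is weighed together with its source.
reliability(alice, 0.9).
reliability(bob,   0.3).

% consider(+Claim, +Source, -Status): information from an unreliable source can
% be rejected outright, but the rejected claim is kept inside the Status term,
% so a later pass could still recover it if the source's reliability is revised.
consider(Claim, Source, accepted(Claim, Source)) :-
    reliability(Source, R), R >= 0.5.
consider(Claim, Source, rejected(Claim, Source)) :-
    reliability(Source, R), R < 0.5.
```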
Another relevant detail we had to take into account was the set of circumstances that may cause somebody to commit murder, which lets us build relationships and determine suspects' motives for committing the crime, as well as motives for framing other suspects. Motives are typically a key aspect of murder investigations; most reasonable people do not commit murder for no reason. Motives can be complex, but one article lays them out well: in general, murders can be broken down into four "L"s, love, lust, loathing, and loot (Morrall). We broke our possible motives down into these categories. Our Prolog facts "married" and "have relationship" cover the love and lust categories at different levels, and possibly loathing as well if an affair is involved. Our "owes_money" fact can represent loot, and our "threatened" fact can represent loathing.
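A sketch of how these facts could map onto the four "L" categories is shown below; the suspect names, the have_relationship/2 spelling, and the argument order of threatened/2 are assumptions for illustration only.

```prolog
% Example relationship facts (placeholder suspects and victim).
married(alice, victim).
have_relationship(dave, victim).
owes_money(bob, victim).
threatened(victim, carol).          % assumed order: threatened(Who, Whom)

% possible_motive(+Suspect, -Category): one clause per "L" category.
possible_motive(S, love)     :- married(S, victim).
possible_motive(S, lust)     :- have_relationship(S, victim).
possible_motive(S, loot)     :- owes_money(S, victim).
possible_motive(S, loathing) :- threatened(victim, S).
```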
Our strategy pertaining to motives is to determine which relationships would make somebody more likely to be the killer than somebody else, and which relationships might cause somebody to fake a testimony in order to frame another specific suspect. In the end, motive cannot determine guilt, but it can influence the credibility of a suspect and the skepticism we should have toward their testimony.
We also took into consideration that a suspect might not be reasonable. In addition to the four "L"s, someone might kill because they are a psychopath of some sort (Morrall). In our case we created the fact "arsonist" to represent this.
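Tying the last two paragraphs together, a hedged sketch of how motives and the "arsonist" fact could raise skepticism (reusing the hypothetical possible_motive/2 rules above) might look like this; the weights are invented for illustration and are not our final scoring.

```prolog
% Example fact for the "unreasonable" case (placeholder suspect).
arsonist(eve).

% skepticism(+Suspect, -Level): motives and the arsonist fact never prove
% guilt, they only increase how skeptically we treat a suspect's testimony.
skepticism(Suspect, Level) :-
    findall(1, possible_motive(Suspect, _), MotiveHits),
    length(MotiveHits, MotiveCount),
    ( arsonist(Suspect) -> Unstable = 2 ; Unstable = 0 ),
    Level is 1 + MotiveCount + Unstable.
```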