Link analysis

In network theory, link analysis is a data-analysis technique used to evaluate relationships (connections) between nodes. Relationships may be identified among various types of nodes (objects), including organizations, people and transactions. Link analysis has been used for investigation of criminal activity (fraud detection, counterterrorism, and intelligence), computer security analysis, search engine optimization, market research and medical research.

Knowledge discovery

Knowledge discovery is an iterative and interactive process used to identify, analyze and visualize patterns in data.[1] Network analysis, link analysis and social network analysis are all methods of knowledge discovery; each is a subset of the method before it. At the highest level, most knowledge discovery methods follow these steps:[2]

  1. Data processing
  2. Transformation
  3. Analysis
  4. Visualization

Data gathering and processing requires access to data and faces several inherent issues, including information overload and data errors. Once collected, data must be transformed into a format that can be used effectively by both human and computer analyzers. Visualizations, such as network charts, may then be drawn manually or generated by software from the data. Several algorithms exist to help with analysis of the data, among them Dijkstra's algorithm, breadth-first search, and depth-first search.
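The graph traversals named above are standard building blocks for link analysis. A minimal sketch of breadth-first search over a contact network follows; the network itself and the node names are hypothetical illustrations, not data from any source cited here.

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search: shortest path by hop count between two nodes."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no chain of contacts connects the two nodes

# Hypothetical contact network as an adjacency list
network = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
print(bfs_path(network, "A", "E"))  # ['A', 'B', 'D', 'E']
```

In an investigative setting, the length of such a path is one crude measure of how closely two entities are associated.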

Link analysis focuses on analysis of relationships among nodes through visualization methods (network charts, association matrices). Here is an example of the relationships that may be mapped in a crime investigation:[3]

Relationship/network types and their data sources:

  1. Trust: prior contacts in family, neighborhood, school, military, club or organization; public and court records. Data may only be available in the suspect's native country.
  2. Task: logs and records of phone calls, electronic mail, chat rooms, instant messages and Web site visits; travel records; human intelligence (observation of meetings and attendance at common events).
  3. Money & Resources: bank account and money transfer records; pattern and location of credit card use; prior court records; human intelligence (observation of visits to alternate banking resources such as Hawala).
  4. Strategy & Goals: Web sites; videos and encrypted disks delivered by courier; travel records; human intelligence (observation of meetings and attendance at common events).
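Relationships gathered from sources like those above are commonly recorded in an association matrix before being drawn as a link chart. A minimal sketch of building one follows; the names and observed links are hypothetical illustrations.

```python
# Build a simple association matrix from observed pairwise links.
people = ["Ali", "Ben", "Cara", "Dana"]  # hypothetical entities
observed_links = [("Ali", "Ben"), ("Ben", "Cara"), ("Ali", "Dana")]

index = {name: i for i, name in enumerate(people)}
matrix = [[0] * len(people) for _ in people]
for a, b in observed_links:
    matrix[index[a]][index[b]] = 1
    matrix[index[b]][index[a]] = 1  # associations are symmetric

for name, row in zip(people, matrix):
    print(f"{name:>5}", row)
```

A real tool would typically record link strength or type rather than a bare 0/1, but the symmetric matrix is the core structure from which the link chart is generated.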

Link analysis is used for three primary purposes:[4]

  1. Find matches in data for known patterns of interest;
  2. Find anomalies where known patterns are violated;
  3. Discover new patterns of interest (social network analysis, data mining).
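The first purpose, matching data against a known pattern of interest, can be sketched as a small subgraph search. Here the "known pattern" is a closed triad (three mutually connected entities); the edge list is a hypothetical illustration.

```python
from itertools import combinations

def find_triangles(edges):
    """Match a known pattern of interest: closed triads (each pair connected)."""
    adjacency = {}
    for a, b in edges:
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)
    nodes = sorted(adjacency)
    return [
        trio for trio in combinations(nodes, 3)
        if trio[1] in adjacency[trio[0]]
        and trio[2] in adjacency[trio[0]]
        and trio[2] in adjacency[trio[1]]
    ]

edges = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]
print(find_triangles(edges))  # [('A', 'B', 'C')]
```

Anomaly detection, the second purpose, inverts this logic: instead of collecting matches, an analyst flags places where an expected pattern fails to appear.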

History

Klerks categorized link analysis tools into three generations.[5] The first generation was introduced in 1975 as the Anacapa Chart of Harper and Harris.[6] This method requires that a domain expert review data files, identify associations by constructing an association matrix, create a link chart for visualization and finally analyze the network chart to identify patterns of interest. This method requires extensive domain knowledge and is extremely time-consuming when reviewing vast amounts of data.

Second-generation tools consist of automatic graphics-based analysis tools such as Analyst's Notebook, Netmap and Watson. These tools can automatically construct and update the link chart once an association matrix is manually created; however, analysis of the resulting charts and graphs still requires an expert with extensive domain knowledge.

Third generation tools have not yet been produced; this generation is expected to move beyond drawing of links to interpreting links.

Issues with link analysis

Information overload

With the vast amounts of data and information that are stored electronically, users are confronted with multiple unrelated sources of information available for analysis. Data analysis techniques are required to make effective and efficient use of the data. Palshikar classifies data analysis techniques into two categories – statistical (models, time-series analysis, clustering and classification, matching algorithms to detect anomalies) and artificial intelligence (AI) techniques (data mining, expert systems, pattern recognition, machine learning techniques, neural networks).[7]

Bolton & Hand divide statistical data analysis into supervised and unsupervised methods.[8] Supervised learning methods require that rules be defined within the system to establish what is expected or unexpected behavior. Unsupervised learning methods compare data against the norm and detect statistical outliers. Supervised learning methods are limited in the scenarios they can handle, since training rules must be established from previous patterns. Unsupervised learning methods can detect broader issues, but may produce a higher false-positive ratio if the behavioral norm is not well established or understood.
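The unsupervised approach can be sketched with a simple z-score outlier test: no rules are defined in advance; values are flagged purely by their distance from the sample norm. The transaction amounts and the 2.5 threshold below are illustrative assumptions, not figures from any cited source.

```python
import statistics

def zscore_outliers(values, threshold=2.5):
    """Unsupervised detection: flag values far from the sample norm."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Hypothetical daily transaction amounts; one value deviates sharply.
amounts = [120, 95, 130, 110, 105, 98, 5000, 115, 102, 125]
print(zscore_outliers(amounts))  # [5000]
```

Note that a single extreme value inflates the standard deviation and so partially masks itself, which is one reason a modest threshold is used here; this masking effect is an instance of the poorly-established-norm problem described above.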

Data itself has inherent issues, including integrity (or the lack of it) and continuous change. Data may contain “errors of omission and commission because of faulty collection or handling, and when entities are actively attempting to deceive and/or conceal their actions”.[4] Sparrow[9] highlights incompleteness (the inevitability of missing data or links), fuzzy boundaries (subjectivity in deciding what to include) and dynamic changes (the recognition that data is ever-changing) as the three primary problems with data analysis.[3]

Once data is transformed into a usable format, open texture and cross referencing issues may arise. Open texture was defined by Waismann as the unavoidable uncertainty in meaning when empirical terms are used in different contexts.[10] Uncertainty in meaning of terms presents problems when attempting to search and cross reference data from multiple sources.[11]

The primary method for resolving data analysis issues is reliance on domain knowledge from an expert. This is a very time-consuming and costly way of conducting link analysis and has inherent problems of its own. McGrath et al. conclude that the layout and presentation of a network diagram have a significant impact on the user's “perceptions of the existence of groups in networks”.[12] Even using domain experts may result in differing conclusions, as analysis may be subjective.

Prosecution vs. crime prevention

Link analysis techniques have primarily been used for prosecution, as it is far easier to review historical data for patterns than it is to attempt to predict future actions.

Krebs demonstrated the use of an association matrix and link chart to map the terrorist network of the 19 hijackers responsible for the September 11th attacks, using details made publicly available after the attacks.[3] Even with the advantages of hindsight and public information on people, places and transactions, it is clear that data was missing.

In contrast, Picarelli argued that link analysis techniques could have been used to identify, and potentially prevent, illicit activities within the Aum Shinrikyo network.[13] “We must be careful of ‘guilt by association’. Being linked to a terrorist does not prove guilt – but it does invite investigation.”[3] Balancing the legal concepts of probable cause, the right to privacy and freedom of association becomes challenging when reviewing potentially sensitive data with the objective of preventing crime or illegal activity that has not yet occurred.

Proposed solutions

There are four categories of proposed link analysis solutions:[14]

  1. Heuristic-based
  2. Template-based
  3. Similarity-based
  4. Statistical

Heuristic-based tools use decision rules distilled from expert knowledge of structured data. Template-based tools employ natural language processing (NLP) to extract details from unstructured data and match them to predefined templates. Similarity-based approaches use weighted scoring to compare attributes and identify potential links. Statistical approaches identify potential links based on lexical statistics.
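Similarity-based weighted scoring can be sketched as a field-by-field comparison of two records, with a per-attribute weight and a match threshold. The field names, weights and threshold below are hypothetical assumptions for illustration, not values from the cited literature.

```python
def weighted_similarity(record_a, record_b, weights):
    """Score two records by comparing attributes; weights sum to the max score."""
    score = 0.0
    for field, weight in weights.items():
        # Count a field only when both records have it and the values agree.
        if record_a.get(field) and record_a.get(field) == record_b.get(field):
            score += weight
    return score / sum(weights.values())

# Hypothetical attribute weights and records
weights = {"name": 0.5, "phone": 0.3, "address": 0.2}
a = {"name": "J. Smith", "phone": "555-0100", "address": "12 Elm St"}
b = {"name": "J. Smith", "phone": "555-0100", "address": "99 Oak Ave"}

score = weighted_similarity(a, b, weights)
print(f"{score:.2f}")  # 0.80
if score >= 0.7:  # hypothetical link threshold
    print("potential link")
```

Real systems typically replace the exact-match test with fuzzy string comparison so that near-matches ("J. Smith" vs. "John Smith") still contribute partial weight.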

CrimeNet Explorer

J.J. Xu and H. Chen propose a framework for automated network analysis and visualization called CrimeNet Explorer.[15] This framework includes the following elements:

  1. Network creation
  2. Network partition
  3. Structural analysis
  4. Network visualization

References

  1. ^ Tor Project
  2. ^ Ahonen, H., Features of Knowledge Discovery Systems.
  3. ^ a b c d Krebs, V. E. 2001, Mapping networks of terrorist cells, Connections 24, 43–52.
  4. ^ Klerks, P. 2001, The network paradigm applied to criminal organizations: Theoretical nitpicking or a relevant doctrine for investigators? Recent developments in the Netherlands, Connections 24, 53–65.
  5. ^ Harper and Harris, The Analysis of Criminal Intelligence, Human Factors and Ergonomics Society Annual Meeting Proceedings, 19(2), 1975, pp. 232-238.
  6. ^ Palshikar, G. K., The Hidden Truth, Intelligent Enterprise, May 2002.
  7. ^ Bolton, R. J. & Hand, D. J., Statistical Fraud Detection: A Review, Statistical Science, 2002, 17(3), pp. 235-255.
  8. ^ Sparrow, M. K. 1991, Network Vulnerabilities and Strategic Intelligence in Law Enforcement, International Journal of Intelligence and Counterintelligence, Vol. 5, No. 3.
  9. ^ Friedrich Waismann, Verifiability (1945), p. 2.
  10. ^ Lyons, D., Open Texture and the Possibility of Legal Interpretation (2000).
  11. ^ McGrath, C., Blythe, J., Krackhardt, D., Seeing Groups in Graph Layouts.
  12. ^ Picarelli, J. T., Transnational Threat Indications and Warning: The Utility of Network Analysis, Military and Intelligence Analysis Group.
  13. ^ Schroeder et al., Automated Criminal Link Analysis Based on Domain Knowledge, Journal of the American Society for Information Science and Technology, 58:6 (842), 2007.
  14. ^ a b c d Xu, J.J. & Chen, H., CrimeNet Explorer: A Framework for Criminal Network Knowledge Discovery, ACM Transactions on Information Systems, 23(2), April 2005, pp. 201-226.
