However: Computing the hash value has some overhead. Also, there is substantial overhead in constructing Dictionaries in the first place.
Example: Whenever you add items to the Dictionary, their hash codes must be computed and the entries stored in the buckets.
Note: This test used random path names ("dliu3ms0.idt") generated by Path.GetRandomFileName, an easy way to get random strings.
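Path.GetRandomFileName returns an 8.3-style name (8 characters, a dot, a 3-character extension), which makes it a convenient source of short random strings. A minimal sketch of generating test keys this way (the count of 5 is arbitrary):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

class Program
{
    static void Main()
    {
        // Generate random strings like "dliu3ms0.idt" to use as keys.
        var keys = new List<string>();
        for (int i = 0; i < 5; i++)
        {
            keys.Add(Path.GetRandomFileName());
        }
        foreach (string key in keys)
        {
            Console.WriteLine(key);
        }
    }
}
```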
Dictionary code that was benchmarked: C#
// 1. Get random number.
int n = r.Next(m);
// 2. Get random string.
string k = l[n];
// 3. See if it exists.
bool hit = false;
if (d.ContainsKey(k))
{
    hit = true;
}
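The snippet above relies on setup that is not shown. A sketch of what the variables might look like (the names r, m, l, l2 and d match the snippets; the sizes and the index-valued Dictionary are assumptions):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// Assumed setup: l holds the random strings, d maps each string
// to its index, l2 is the copy scanned by the List test, and
// r/m drive the random key selection.
var l = new List<string>();
var d = new Dictionary<string, int>();
for (int i = 0; i < 1000; i++)
{
    string s = Path.GetRandomFileName();
    l.Add(s);
    d[s] = i;
}
var l2 = new List<string>(l);
int m = l.Count;
var r = new Random();
```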
List code that was benchmarked: C#
// 1. Get random number.
int n = r.Next(m);
// 2. Get random string.
string k = l[n];
// Loop through strings to see if it exists.
bool hit = false;
foreach (string s2 in l2)
{
    if (s2 == k)
    {
        hit = true;
        break;
    }
}
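The timings below are in milliseconds, so each lookup must be repeated many times inside a timed loop. A hedged sketch of how such a loop might be measured with Stopwatch (the iteration count and sample data are assumptions, not the original harness):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

var d = new Dictionary<string, int> { { "cat", 1 }, { "dog", 2 } };
var watch = Stopwatch.StartNew();
bool hit = false;
for (int i = 0; i < 10000000; i++)
{
    // The lookup under test; swap in the List loop to compare.
    if (d.ContainsKey("dog"))
    {
        hit = true;
    }
}
watch.Stop();
Console.WriteLine("{0} ms, hit={1}", watch.ElapsedMilliseconds, hit);
```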
Result: My test showed that when you exceed three items, the Dictionary lookup is faster. You can see a graph above.
Number  Dictionary (ms)  List (ms)
 1      655               453
 2      702               577
 3      702               670
 4      655               749
 5      686               811
 6      702               874
 7      687               936
 8      702              1014
 9      702              1077
10      702              1108
11      687              1170
12      718              1232
Also: When uniqueness is important, Dictionary or Hashtable can automatically check for duplicates. This can lead to simpler code.
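For example, ContainsKey lets you detect a repeated key before inserting it (Dictionary's Add would instead throw on a duplicate). A short sketch with assumed sample data:

```csharp
using System;
using System.Collections.Generic;

// Track which strings have been seen; report any repeats.
var seen = new Dictionary<string, bool>();
string[] inputs = { "cat", "dog", "cat" };
foreach (string s in inputs)
{
    if (seen.ContainsKey(s))
    {
        Console.WriteLine("Duplicate: " + s);
    }
    else
    {
        seen[s] = true;
    }
}
// Prints: Duplicate: cat
```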
Thus: I use three elements as the threshold at which I switch from List loops to Dictionary lookups.