Files for which a hash cannot be found should not be added to the DHT.

If the hash can't be found, it stands to reason that other peers will
not be able to find the hash either, so adding those files to the DHT
would just clutter it with useless information. Examples include
Release.gpg, Release, Translation-de.bz2, and Contents.gz.
Packages.diff files need to be considered.

The Packages.diff/Index files contain hashes of Packages.diff/rred.gz
files, which themselves contain diffs to the Packages files previously
downloaded. Apt will request these files for the testing/unstable
distributions. They need to either be ignored, or dealt with properly by
adding them to the tracking done by the AptPackages module.
Hashes need to be sent with requests for some files.

Some files can change, without their names changing, after the file has
been added to the DHT by the peer. Examples are Release, Packages.gz,
and Sources.bz2. For files like this (and only for files like this),
the request to download from the peer should include the downloader's
expected hash for the file as a new HTTP header. If the file is found,
the cached hash for the file will be used to determine whether the
request is for the same file as is currently available, and a special
HTTP response can be sent if it is not (i.e. not a 404).
Alternatively, consider sharing the files by hash instead of by
directory. Then the request would be for
http://127.3.45.9:9977/<urlencodedHash>, and it would always work. This
would require a database lookup for every request.
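The first option could be sketched like this; the X-Expected-Hash
header name and the 419 status code are assumptions for illustration
(the text only requires that the special response not be a 404):

```python
# Hypothetical status for "same name, different contents"; the text
# only requires that it not be a 404.
HASH_MISMATCH = 419

def check_request(path, headers, hash_cache):
    """Decide the HTTP status for a peer's request. hash_cache maps
    shared paths to the hashes cached when they were added to the DHT;
    X-Expected-Hash is an assumed name for the new header."""
    cached = hash_cache.get(path)
    if cached is None:
        return 404                    # file not shared by this peer
    expected = headers.get('X-Expected-Hash')
    if expected is not None and expected != cached:
        return HASH_MISMATCH          # file changed since it was added
    return 200                        # same file, serve it normally
```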
PeerManager needs to download large files from multiple peers.

The PeerManager currently chooses a peer at random from the list of
possible peers, and downloads the entire file from there. This needs to
change if both a) the file is large (more than 512 KB), and b) there
are multiple peers with the file. The PeerManager should then break up
the large file into multiple pieces of size < 512 KB, and then send
requests to multiple peers for these pieces.
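A minimal sketch of the piece split, using the 512 KB threshold from
the text and a simple round-robin assignment (the actual assignment
policy is left open here):

```python
PIECE_SIZE = 512 * 1024  # the 512 KB threshold from the text

def plan_pieces(file_size, peers):
    """Break a file into (start, length, peer) requests. Small files,
    or files with only one source, are fetched whole from one peer."""
    if file_size <= PIECE_SIZE or len(peers) < 2:
        return [(0, file_size, peers[0])]
    plan = []
    start = 0
    while start < file_size:
        length = min(PIECE_SIZE, file_size - start)
        # round-robin assignment; a smarter policy could rank the peers
        plan.append((start, length, peers[len(plan) % len(peers)]))
        start += length
    return plan
```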
This can cause a problem with hash checking the returned data, as
hashes for the pieces are not known. Any file that fails a hash check
should be downloaded again, with each piece being downloaded from
different peers than it was previously. The peers are shifted by 1, so
that if a peer previously downloaded piece i, it now downloads piece
i+1, and the first piece is downloaded by the previous downloader of
the last piece, or preferably by a previously unused peer. As each
piece is downloaded, the running hash of the file should be checked to
determine the place at which the file differs from the previous
download.
If the hash check then passes, the peer who originally provided the
bad piece can be assessed blame for the error. Otherwise, the peer who
originally provided the piece is probably at fault, since it is now
providing a later piece. This doesn't work if the differing piece is
the first piece, in which case it is downloaded from a 3rd peer, with
consensus revealing the misbehaving peer.
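The shift-by-one reassignment might look like this (a sketch;
`previous[i]` is the peer that downloaded piece i last time):

```python
def shift_peers(previous, unused_peer=None):
    """Reassign peers for a re-download after a failed hash check:
    the peer that downloaded piece i now downloads piece i+1, and the
    first piece goes to a previously unused peer if one is available,
    otherwise to the previous downloader of the last piece."""
    first = unused_peer if unused_peer is not None else previous[-1]
    return [first] + previous[:-1]
```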
Consider storing torrent-like strings in the DHT.

Instead of only storing the file download location (which would still
be used for small files), a bencoded dictionary containing the peer's
hashes of the individual pieces could be stored for the larger files
(20% of all the files are larger than 512 KB). This dictionary would
have the download location, a list of the piece sizes, and a list of
the piece hashes (BitTorrent uses a single string of length
20*#pieces, but for the general non-SHA1 case a list is needed).
These piece hashes could be compared ahead of time to determine which
peers have the same piece hashes (they all should), and then used
during the download to verify the downloaded pieces.
Alternatively, the peers could store the torrent-like string for large
files separately, and only contain a reference to it in their stored
value for the hash of the file. The reference would be a hash of the
bencoded dictionary, and a lookup of that hash in the DHT would give
the torrent-like string. (A 100 MB file would result in 200 hashes,
which would create a bencoded dictionary larger than 6000 bytes.)
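A sketch of both variants; the key names ('t', 'l', 'h') are
assumptions rather than a defined format, and the minimal bencoder lets
the dictionary itself be hashed for the store-by-reference variant:

```python
import hashlib

def torrent_value(location, piece_sizes, piece_hashes):
    """The torrent-like value for one large file. Piece hashes are a
    list (not one concatenated string) for the general non-SHA1 case."""
    return {'t': location, 'l': piece_sizes, 'h': piece_hashes}

def bencode(obj):
    """Minimal bencoder for the types used above, so the dictionary
    can be stored directly, or hashed and stored by reference."""
    if isinstance(obj, bytes):
        return str(len(obj)).encode() + b':' + obj
    if isinstance(obj, str):
        return bencode(obj.encode())
    if isinstance(obj, int):
        return b'i' + str(obj).encode() + b'e'
    if isinstance(obj, list):
        return b'l' + b''.join(bencode(x) for x in obj) + b'e'
    if isinstance(obj, dict):
        body = b''.join(bencode(k) + bencode(obj[k]) for k in sorted(obj))
        return b'd' + body + b'e'
    raise TypeError(type(obj))

value = torrent_value('http://127.3.45.9:9977/f',
                      [524288, 100], [b'\x01' * 20, b'\x02' * 20])
# By-reference variant: store this 20-byte hash in the file's DHT
# value, and the full bencoded dictionary under this hash.
reference = hashlib.sha1(bencode(value)).digest()
```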
PeerManager needs to track peers' properties.

The PeerManager needs to keep track of the observed properties of seen
peers, to help determine a selection criterion for choosing peers to
download from. Each property will give a value from 0 to 1. The
relevant properties are:

- hash errors in last day (1 = 0, 0 = 3+)
- recent download speed (1 = fastest, 0 = 0)
- lag time from request to download (1 = 0, 0 = 15s+)
- number of pending requests (1 = 0, 0 = max (10))
- whether a connection is open (1 = yes, 0.9 = no)
These should be combined (multiplied) to provide a sort order for peers
available to download from, which can then be used to assign new
downloads to peers. Pieces should be downloaded from the best peers
first (i.e. piece 0 from the absolute best peer).
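The combination could be sketched as follows; the field names and the
linear mappings onto [0, 1] are assumptions based on the ranges listed
above:

```python
def peer_score(peer):
    """Multiply the 0-to-1 property values into one sort key."""
    errors = max(0.0, 1 - peer['hash_errors_today'] / 3)   # 1 = 0, 0 = 3+
    speed = peer['speed_rank']                   # 1 = fastest seen, 0 = 0
    lag = max(0.0, 1 - peer['lag_seconds'] / 15)           # 1 = 0, 0 = 15s+
    pending = max(0.0, 1 - peer['pending_requests'] / 10)  # 1 = 0, 0 = 10
    connected = 1.0 if peer['connection_open'] else 0.9
    return errors * speed * lag * pending * connected

def order_peers(peers):
    """Best peers first, so piece 0 goes to the absolute best peer."""
    return sorted(peers, key=peer_score, reverse=True)
```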
Missing Kademlia implementation details need to be added.

The current implementation is missing some important features, mostly
focused on storing values:

- values need to be republished (every hour?)
- original publishers need to republish values (every 24 hours)
- when a new node is found that is closer to some values, replicate the
  values there without deleting them
- when a value lookup succeeds, store the value in the closest node
  found that didn't have it
- make the expiration time of a value exponentially inversely
  proportional to the number of nodes between the current node and the
  node closest to the value
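The last item could be sketched like this, with values held by the k
closest nodes keeping the full 24-hour lifetime and the lifetime
decaying exponentially beyond that; the exact decay rate is an
assumption:

```python
import math

FULL_LIFETIME = 24 * 60 * 60  # seconds; the 24-hour republish interval

def expiration_time(nodes_between, k=20):
    """Expiration time exponentially inversely proportional to the
    number of nodes between the current node and the node closest to
    the value. Within the k closest (k=20 is the usual Kademlia bucket
    size), the value keeps its full lifetime; the e^-n decay beyond
    that is an assumed rate."""
    if nodes_between < k:
        return FULL_LIFETIME
    return FULL_LIFETIME * math.exp(-(nodes_between - k))
```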