X-Git-Url: https://git.mxchange.org/?p=quix0rs-apt-p2p.git;a=blobdiff_plain;f=TODO;h=bd8283c904d6ed372c9a84136ce204422bb6e183;hp=2ff301ac1065758b7d606da98e2ce582e1df02d9;hb=1e5de6cdc6ce1df1ca985e45d15c4e10931aaf38;hpb=8a9e9644183186c3c88da0b995bb49059408cc75

diff --git a/TODO b/TODO
index 2ff301a..bd8283c 100644
--- a/TODO
+++ b/TODO
@@ -1,9 +1,51 @@
-Files for which a hash cannot be found should not be added to the DHT.
-
-If the hash can't found, it stands to reason that other peers will not
-be able to find the hash either. So adding those files to the DHT will
-just clutter it with useless information. Examples include Release.gpg,
-Release, Translation-de.bz2, and Contents.gz.
+Rotate DNS entries for mirrors more reliably.
+
+Currently the mirrors are accessed by DNS name, which can cause some
+issues when there are mirror differences and the DNS gets rotated.
+Instead, the HTTP Downloader should handle DNS lookups itself, store
+the resulting addresses, and send requests to IP addresses. If there
+is an error from the mirror (hash check or 404 response), the next IP
+address in the rotation should be used.
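
A rough sketch of such a rotation, with hypothetical names; a real
implementation would resolve asynchronously (e.g. with twisted.names)
rather than use the blocking socket call shown here for brevity:

    import socket

    class AddressRotator(object):
        """Resolve a mirror name once, then rotate through its IPs."""

        def __init__(self, host, port=80):
            # Collect all the IPv4 addresses the mirror resolves to.
            infos = socket.getaddrinfo(host, port, socket.AF_INET,
                                       socket.SOCK_STREAM)
            self.addresses = [addr[0] for _, _, _, _, addr in infos]
            self.current = 0

        def address(self):
            """The IP address requests are currently being sent to."""
            return self.addresses[self.current]

        def report_error(self):
            """Advance to the next IP after a hash failure or a 404."""
            self.current = (self.current + 1) % len(self.addresses)

The HTTP Downloader would keep one rotator per mirror, send every
request to address(), and call report_error() when a response fails
its hash check or comes back as a 404.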
+
+
+Use GPG signatures as a hash for files.
+
+A detached GPG signature, such as is found in Release.gpg, can be used
+as a hash for the file. This hash can be used to verify the file when
+it is downloaded, and a shortened version can be added to the DHT to
+look up peers for the file. To get the hash into a binary form from
+the ASCII-armored detached file, use the command
+'gpg --no-options --no-default-keyring --output - --dearmor -'. The
+hash should be stored as the reverse of the resulting binary string,
+as the bytes at the beginning are headers that are the same for most
+signatures. That way the shortened hash stored in the DHT will have a
+better chance of being unique and of being stored on different peers.
+To verify a file, first the binary hash must be re-reversed, armored,
+and written to a temporary file with the command
+'gpg --no-options --no-default-keyring --output $tempfile --enarmor -'.
+Then the incoming file can be verified with the command
+'gpg --no-options --no-default-keyring --keyring /etc/apt/trusted.gpg
+--verify $tempfile -'.
+
+All communication with the command-line gpg should be done using pipes
+and the python module python-gnupginterface. There needs to be a new
+module for GPG verification and hashing, which will make this easier.
+In particular, it would need to support hashlib-like functionality
+such as new(), update(), and digest(). Note that the verification
+would not involve signing the file again and comparing the signatures,
+as this is not possible. Instead, the verify() function would have to
+behave differently for GPG hashes, and check that the verification
+resulted in a VALIDSIG. CAUTION: the detached signature can have a
+variable length; it seems to usually be 65 bytes, but 64 bytes has
+also been observed.
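
A sketch of the hashlib-like interface such a module could expose; it
pipes to the command-line gpg via subprocess for brevity (the intent
above is python-gnupginterface), and the names are illustrative:

    import subprocess

    GPG = ['gpg', '--no-options', '--no-default-keyring']

    def _pipe(args, input):
        """Run gpg with the given arguments, feeding input on stdin."""
        proc = subprocess.Popen(GPG + args, stdin=subprocess.PIPE,
                                stdout=subprocess.PIPE)
        output, _ = proc.communicate(input)
        return output

    class GPGHash(object):
        """hashlib-like hash built from a detached GPG signature."""

        def __init__(self, data=''):
            self._armored = data

        def update(self, data):
            self._armored += data

        def digest(self):
            # Dearmor the signature, then reverse it so the common
            # header bytes end up at the tail and a shortened prefix
            # is more likely to be unique in the DHT.
            binary = _pipe(['--output', '-', '--dearmor', '-'],
                           self._armored)
            return binary[::-1]

    def verify(digest, tempfile, filename):
        """Re-reverse and enarmor the hash, then check for VALIDSIG."""
        _pipe(['--output', tempfile, '--enarmor', '-'], digest[::-1])
        # --status-fd 1 puts the machine-readable status lines,
        # including VALIDSIG, on stdout.
        proc = subprocess.Popen(GPG + ['--keyring', '/etc/apt/trusted.gpg',
                                       '--status-fd', '1',
                                       '--verify', tempfile, '-'],
                                stdin=open(filename, 'rb'),
                                stdout=subprocess.PIPE)
        status, _ = proc.communicate()
        return 'VALIDSIG' in status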
+
+
+Consider what happens when multiple requests for a file are received.
+
+When another request comes in for a file already being downloaded,
+the new request should wait for the old one to finish. This should
+also be done for multiple requests for peer downloads of files with
+the same hash.
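
One possible shape for this, sketched with Twisted Deferreds (the
class and method names are illustrative):

    from twisted.internet import defer

    class DownloadQueue(object):
        """Coalesce concurrent requests for one hash into a single
        download, whether from a mirror or from other peers."""

        def __init__(self):
            self.waiting = {}  # file hash -> Deferreds awaiting it

        def get(self, file_hash, start_download):
            """Return a Deferred for the file, starting at most one
            actual download per hash."""
            if file_hash in self.waiting:
                # Already being downloaded: wait for that download.
                d = defer.Deferred()
                self.waiting[file_hash].append(d)
                return d
            self.waiting[file_hash] = []
            d = start_download(file_hash)
            d.addBoth(self._finished, file_hash)
            return d

        def _finished(self, result, file_hash):
            # Pass the result (or the failure) on to every waiter.
            for waiter in self.waiting.pop(file_hash):
                waiter.callback(result)
            return result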
 
 
 Packages.diff files need to be considered.
@@ -11,101 +53,83 @@ Packages.diff files need to be considered.
 The Packages.diff/Index files contain hashes of Packages.diff/rred.gz
 files, which themselves contain diffs to the Packages files previously
 downloaded. Apt will request these files for the testing/unstable
-distributions. They need to either be ignored, or dealt with properly by
+distributions. They need to be dealt with properly by
 adding them to the tracking done by the AptPackages module.
 
 
-Change file identifier from path to hash.
-
-Some files can change without changing the path, since the file was
-added to the DHT by the peer. Examples are Release, Packages.gz, and
-Sources.bz2. This would cause problems when requesting these files by
-path. Instead, share the files by hash, then the request would be for
-http://127.3.45.9:9977/~<hash>, and it would always work. This
-will require a database lookup for every request.
-
-
-PeerManager needs to download large files from multiple peers.
-
-The PeerManager currently chooses a peer at random from the list of
-possible peers, and downloads the entire file from there. This needs to
-change if both a) the file is large (more than 512 KB), and b) there are
-multiple peers with the file. The PeerManager should then break up the
-large file into multiple pieces of size < 512 KB, and then send requests
-to multiple peers for these pieces.
-
-This can cause a problem with hash checking the returned data, as hashes
-for the pieces are not known. Any file that fails a hash check should be
-downloaded again, with each piece being downloaded from different peers
-than it was previously. The peers are shifted by 1, so that if a peers
-previously downloaded piece i, it now downloads piece i+1, and the first
-piece is downloaded by the previous downloader of the last piece, or
-preferably a previously unused peer. As each piece is downloaded the
-running hash of the file should be checked to determine the place at
-which the file differs from the previous download.
-
-If the hash check then passes, then the peer who originally provided the
-bad piece can be assessed blame for the error. Otherwise, the peer who
-originally provided the piece is probably at fault, since he is now
-providing a later piece. This doesn't work if the differing piece is the
-first piece, in which case it is downloaded from a 3rd peer, with
-consensus revealing the misbehaving peer.
-
-
-Store and share torrent-like strings for large files.
-
-In addition to storing the file download location (which would still be
-used for small files), a bencoded dictionary containing the peer's
-hashes of the individual pieces could be stored for the larger files
-(20% of all the files are larger than 512 KB). This dictionary would
-have the normal piece size, the hash length, and a string containing the
-piece hashes of length <hash length>*<#pieces>. These piece hashes could
-be compared ahead of time to determine which peers have the same piece
-hashes (they all should), and then used during the download to verify
-the downloaded pieces.
-
-For very large files (5 or more pieces), the torrent strings are too
-long to store in the DHT and retrieve (a single UDP packet should be
-less than 1472 bytes to avoid fragmentation). Instead, the peers should
-store the torrent-like string for large files separately, and only
-contain a reference to it in their stored value for the hash of the
-file. The reference would be a hash of the bencoded dictionary. If the
-torrent-like string is short enough to store in the DHT (i.e. less than
-1472 bytes, or about 70 pieces for the SHA1 hash), then a
-lookup of that hash in the DHT would give the torrent-like string.
-Otherwise, a request to the peer for the hash (just like files are
-downloaded), should return the bencoded torrent-like string.
-
-
-PeerManager needs to track peers' properties.
-
-The PeerManager needs to keep track of the observed properties of seen
-peers, to help determine a selection criteria for choosing peers to
-download from. Each property will give a value from 0 to 1. The relevant
-properties are:
-
- - hash errors in last day (1 = 0, 0 = 3+)
- - recent download speed (1 = fastest, 0 = 0)
- - lag time from request to download (1 = 0, 0 = 15s+)
- - number of pending requests (1 = 0, 0 = max (10))
- - whether a connection is open (1 = yes, 0.9 = no)
-
-These should be combined (multiplied) to provide a sort order for peers
-available to download from, which can then be used to assign new
-downloads to peers. Pieces should be downloaded from the best peers
-first (i.e. piece 0 from the absolute best peer).
-
-
-Missing Kademlia implementation details are needed.
-
-The current implementation is missing some important features, mostly
-focussed on storing values:
- - values need to be republished (every hour?)
- - original publishers need to republish values (every 24 hours)
- - when a new node is found that is closer to some values, replicate the
-   values there without deleting them
- - when a value lookup succeeds, store the value in the closest node
-   found that didn't have it
- - make the expiration time of a value exponentially inversely
-   proportional to the number of nodes between the current node and the
-   node closest to the value
+Improve the estimation of the total number of nodes.
+
+The current total nodes estimation is based on the number of buckets.
+A better way is to look at the average inter-node spacing for the K
+closest nodes after a find_node/value completes. Be sure to measure
+the inter-node spacing in log2 space to dampen any ill effects. This
+can be used in the formula:
+    nodes = 2^160 / 2^(average of log2 spacing)
+The average should also be saved using an exponentially weighted
+moving average (of the log2 distance) over separate find_node/value
+actions to get a better calculation over time.
+
+
+Improve the downloaded and uploaded data measurements.
+
+There are two places where this data is measured: for statistics, and
+for limiting the upload bandwidth. Both have deficiencies, as they
+sometimes miss the headers or the requests sent out. The upload
+bandwidth calculation only considers the stream in the upload and not
+the headers sent, and it also doesn't consider the upload bandwidth
+used by requesting downloads from peers (though that may be a good
+thing). The statistics calculations for downloads include the headers
+of downloaded files, but not the requests received from peers for
+upload files. The statistics for uploaded data only include the files
+sent and not the headers, and also miss the requests for downloads
+sent to other peers.
+
+
+Rehash changed files instead of removing them.
+
+When the modification time of a file changes but the size does not,
+the file could be rehashed to verify it is the same, instead of
+automatically removing it. The DB would have to be modified to return
+Deferreds for many of its functions.
+
+
+Consider storing deltas of packages.
+
+Instead of downloading full package files when a previous version of
+the same package is available, peers could request a delta of the
+package to the previous version. This would only be done if the delta
+is significantly (>50%) smaller than the full package, and is not too
+large in absolute terms. A peer that has a new package and an old one
+would add a list of deltas for the package to the value stored in the
+DHT. The delta information would specify the old version (by hash),
+the size of the delta, and the hash of the delta. A peer that has the
+same old package could then download the delta from the peer by
+requesting the hash of the delta. Alternatively, very small deltas
+could be stored directly in the DHT.
+
+
+Consider tracking security issues with packages.
+
+Since sharing information with others about what packages you have
+downloaded (and probably installed) is a possible security
+vulnerability, it would be advantageous to not share that information
+for packages that have known security vulnerabilities. This would
+require some way of obtaining a list of which packages (and versions)
+are vulnerable, which is not currently available.
+
+
+Consider adding peer characteristics to the DHT.
+
+Bad peers could be indicated in the DHT by adding a new value, stored
+at the NOT of their ID (so they are guaranteed not to store it), that
+carries information about the peer. This could be bad votes on the
+peer, as otherwise a peer could add good information about itself.
+
+
+Consider adding pieces to the DHT instead of files.
+
+Instead of adding file hashes to the DHT, only piece hashes could be
+added. This would allow a peer to upload to other peers while it is
+still downloading the rest of the file. It is not clear that this is
+needed, since peers will not be uploading and downloading very much
+of the time.
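
For the node count estimation entry above, a sketch of the
calculation; the smoothing factor ALPHA is a guess to be tuned:

    import math

    ALPHA = 0.2  # EWMA smoothing factor (a guess)

    class NodeEstimator(object):
        """Estimate the total number of nodes from the K closest
        nodes returned by each find_node/value action."""

        def __init__(self):
            self.avg_log_spacing = None

        def update(self, distances):
            """distances: XOR distances of the K closest nodes found,
            as ints in the 160-bit key space, sorted ascending."""
            # Average the inter-node spacing in log2 space, damping
            # the effect of unusually wide or narrow gaps.
            logs = [math.log(b - a, 2) for a, b
                    in zip(distances[:-1], distances[1:]) if b > a]
            sample = sum(logs) / len(logs)
            if self.avg_log_spacing is None:
                self.avg_log_spacing = sample
            else:
                self.avg_log_spacing = (ALPHA * sample +
                                        (1 - ALPHA) * self.avg_log_spacing)
            # nodes = 2^160 / 2^(average of log2 spacing)
            return 2 ** (160 - self.avg_log_spacing)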
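
For the package deltas entry above, one possible shape for the extra
data in the DHT value; the field names are made up for illustration,
and the dictionary would be bencoded like the rest of the value:

    old_package_hash = '\x01' * 20  # placeholder 20-byte hash
    delta_hash = '\x02' * 20        # placeholder 20-byte hash

    # One entry in the list of deltas a peer would store for a
    # package it has both the new and an old version of.
    delta_info = {
        'old': old_package_hash,  # old version, identified by hash
        'size': 23440,            # size of the delta in bytes
        'hash': delta_hash,       # hash of the delta itself
    }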
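
For the peer characteristics entry above, a sketch of deriving the
key a peer is guaranteed not to store, by inverting its 160-bit ID
(assuming IDs are kept as 20-byte strings):

    def not_id(node_id):
        """Return the bitwise NOT of a 20-byte node ID."""
        return ''.join(chr(ord(c) ^ 0xFF) for c in node_id)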