-Files for which a hash cannot be found should not be added to the DHT.
+Consider what happens when we are the closest node.
-If the hash can't be found, it stands to reason that other peers will not
-be able to find the hash either. So adding those files to the DHT will
-just clutter it with useless information. Examples include Release.gpg,
-Release, Translation-de.bz2, and Contents.gz.
+In some of the actions it is unclear what happens when we are one of the
+closest nodes to the target key. Do we store values that we publish
+ourselves?
+
+
+Add all cache files to the database.
+
+All files in the cache should be added to the database, so that they can
+be periodically verified against their recorded hashes. The database would
+then need a flag to indicate files that are hashed and available, but
+that shouldn't be added to the DHT.
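
A minimal sketch of such a flag in the cache database (the table layout,
column names, and paths here are assumptions for illustration, not the
project's actual schema):

```python
import sqlite3

# In-memory database for illustration; the real cache DB would live on disk.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE files (
        path   TEXT PRIMARY KEY,
        hash   TEXT,               -- hex digest of the file contents
        in_dht INTEGER DEFAULT 1   -- 0 = hashed and available, but kept out of the DHT
    )
""")

# A Release file: verifiable locally, but excluded from the DHT.
conn.execute("INSERT INTO files VALUES (?, ?, ?)",
             ("/var/cache/apt-p2p/Release", "ab12cd34", 0))
# A package file: verifiable and publishable.
conn.execute("INSERT INTO files VALUES (?, ?, ?)",
             ("/var/cache/apt-p2p/foo.deb", "ee55ff66", 1))

# Only rows with in_dht = 1 are candidates for publishing.
publishable = [row[0] for row in
               conn.execute("SELECT path FROM files WHERE in_dht = 1")]
```

A periodic integrity check would then walk all rows (regardless of the
flag) and re-hash each file against its stored digest.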
Packages.diff files need to be considered.
adding them to the tracking done by the AptPackages module.
-Hashes need to be sent with requests for some files.
-
-Some files can change without changing the file name, since the file was
-added to the DHT by the peer. Examples are Release, Packages.gz, and
-Sources.bz2. For files like this (and only for files like this), the
-request to download from the peer should include the downloader's
-expected hash for the file as a new HTTP header. If the file is found,
-the cached hash for the file will be used to determine whether the
-request is for the same file as is currently available, and a special
-HTTP response can be sent if it is not (i.e. not a 404).
-
-Alternatively, consider sharing the files by hash instead of by
-directory. Then the request would be for
-http://127.3.45.9:9977/<urlencodedHash>, and it would always work. This
-would require a database lookup for every request.
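
The header check described above might look like this (the specific
status code for "file changed" and the function name are assumptions;
the TODO only says it should be something other than a 404):

```python
def check_request(requested_path, expected_hash, cached_hashes):
    """Decide how to answer a peer's request carrying an expected-hash header.

    Returns an HTTP status code: 200 if the cached file still matches the
    hash the downloader expects, 404 if the file is not present at all,
    and a special code (410 here, an arbitrary choice) if the file has
    changed since the requester learned of it.
    """
    if requested_path not in cached_hashes:
        return 404
    if cached_hashes[requested_path] != expected_hash:
        return 410  # file exists, but is no longer the version the peer wants
    return 200

# The server compares against its own cached hash for the path.
cache = {"/dists/stable/Release": "deadbeef"}
```

The hash-based URL alternative avoids this header entirely, at the cost
of a database lookup mapping hash to file on every request.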
-
-
PeerManager needs to download large files from multiple peers.
The PeerManager currently chooses a peer at random from the list of
consensus revealing the misbehaving peer.
-Consider storing torrent-like strings in the DHT.
-
-Instead of only storing the file download location (which would still be
-used for small files), a bencoded dictionary containing the peer's
-hashes of the individual pieces could be stored for the larger files
-(20% of all the files are larger than 512 KB). This dictionary would
-have the download location, a list of the piece sizes, and a list of the
-piece hashes (BitTorrent uses a single string of length 20*#pieces, but
-for the general non-SHA1 case a list is needed).
-
-These piece hashes could be compared ahead of time to determine which
-peers have the same piece hashes (they all should), and then used during
-the download to verify the downloaded pieces.
-
-Alternatively, the peers could store the torrent-like string for large
-files separately, and only contain a reference to it in their stored
-value for the hash of the file. The reference would be a hash of the
-bencoded dictionary, and a lookup of that hash in the DHT would give the
-torrent-like string. (A 100 MB file would result in 200 hashes, which
-would create a bencoded dictionary larger than 6000 bytes.)
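
To make the size estimate concrete, a minimal bencoder (just enough for
this dictionary; not the project's own implementation, and the key names
are assumptions) shows how large such a value gets:

```python
def bencode(obj):
    # Minimal bencoding of ints, bytes, lists, and dicts (sorted keys),
    # as used by BitTorrent metainfo files.
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, list):
        return b"l" + b"".join(bencode(x) for x in obj) + b"e"
    if isinstance(obj, dict):
        return (b"d" + b"".join(bencode(k) + bencode(v)
                                for k, v in sorted(obj.items())) + b"e")
    raise TypeError("cannot bencode %r" % (obj,))

# A 100 MB file split into 512 KB pieces gives 200 pieces; with 20-byte
# SHA-1 hashes kept in a list, the dictionary exceeds 6000 bytes.
value = {
    b"location": b"http://127.3.45.9:9977/dists/stable/main/foo.deb",
    b"piece_sizes": [512 * 1024] * 200,
    b"piece_hashes": [b"\x00" * 20] * 200,
}
encoded = bencode(value)
```

Storing only the hash of this dictionary in the main value, and the
dictionary itself under that hash, keeps the per-file DHT entry small.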
-
-
-PeerManager needs to track peers' properties.
-
-The PeerManager needs to keep track of the observed properties of seen
-peers, to help determine a selection criteria for choosing peers to
-download from. Each property will give a value from 0 to 1. The relevant
-properties are:
-
- - hash errors in last day (1 = 0, 0 = 3+)
- - recent download speed (1 = fastest, 0 = 0)
- - lag time from request to download (1 = 0, 0 = 15s+)
- - number of pending requests (1 = 0, 0 = max (10))
- - whether a connection is open (1 = yes, 0.9 = no)
-
-These should be combined (multiplied) to provide a sort order for peers
-available to download from, which can then be used to assign new
-downloads to peers. Pieces should be downloaded from the best peers
-first (i.e. piece 0 from the absolute best peer).
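
The multiplicative combination of the properties above could be
sketched as follows (the clamping follows the ranges in the list; the
function names and linear interpolation are assumptions):

```python
def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def peer_score(hash_errors_today, download_speed, max_speed,
               lag_seconds, pending_requests, connection_open,
               max_pending=10):
    """Combine observed peer properties into a single 0..1 sort key.

    Each factor maps a raw observation onto [0, 1] as in the list above;
    multiplying them means any very bad property drags the whole score
    toward zero.
    """
    factors = [
        clamp(1.0 - hash_errors_today / 3.0),         # 1 = no errors, 0 = 3+
        clamp(download_speed / max_speed) if max_speed else 0.0,
        clamp(1.0 - lag_seconds / 15.0),              # 1 = no lag, 0 = 15s+
        clamp(1.0 - pending_requests / max_pending),  # 1 = idle, 0 = full
        1.0 if connection_open else 0.9,
    ]
    score = 1.0
    for f in factors:
        score *= f
    return score

# Sort peers best-first, so piece 0 goes to the absolute best peer.
best_first = sorted(
    [("peer_a", peer_score(0, 100.0, 100.0, 0.0, 0, True)),
     ("peer_b", peer_score(2, 50.0, 100.0, 5.0, 3, False))],
    key=lambda p: p[1], reverse=True)
```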
-
-
-Missing Kademlia implementation details are needed.
-
-The current implementation is missing some important features, mostly
-focussed on storing values:
- - values need to be republished (every hour?)
- - original publishers need to republish values (every 24 hours)
- - when a new node is found that is closer to some values, replicate the
- values there without deleting them
- - when a value lookup succeeds, store the value in the closest node
- found that didn't have it
- - make the expiration time of a value exponentially inversely
- proportional to the number of nodes between the current node and the
- node closest to the value
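
The last point could be sketched like this; the 24-hour base lifetime
matches the republish interval above, but the exact curve (halving per
intervening node) is an assumption, since the TODO only asks for an
exponentially inverse relationship:

```python
import math

BASE_EXPIRATION = 24 * 60 * 60  # seconds; matches the 24-hour republish interval

def expiration_time(nodes_between):
    """Expiration exponentially inversely proportional to the number of
    nodes between the current node and the node closest to the value.

    The closest node (0 nodes between) keeps the value the full 24 hours;
    each additional intervening node halves the lifetime (the halving
    rate is an assumption).
    """
    return BASE_EXPIRATION * math.exp(-math.log(2) * nodes_between)
```

This makes over-replicated copies at distant nodes fade quickly while
the canonical closest copies persist until republished.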
+Consider tracking security issues with packages.
+
+Since sharing information with others about what packages you have
+downloaded (and probably installed) is a possible security
+vulnerability, it would be advantageous to not share that information
+for packages that have known security vulnerabilities. This would
+require some way of obtaining a list of which packages (and versions)
+are vulnerable, which is not currently available.
+
+
+Consider adding peer characteristics to the DHT.
+
+Bad peers could be indicated in the DHT by storing a new value under the
+NOT of their ID (a key they are guaranteed not to store themselves)
+containing information about the peer. This should be limited to bad
+votes on the peer, as otherwise a peer could add good info about itself.
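
The NOT of an ID is its bitwise complement: under Kademlia's XOR metric
it is the single key at maximal distance from the peer, so the peer can
never be among the nodes responsible for storing it. A sketch assuming
160-bit (SHA-1 sized) node IDs:

```python
ID_BYTES = 20  # 160-bit IDs, as with SHA-1; an assumption about this DHT

def not_id(node_id: bytes) -> bytes:
    """Bitwise complement of a node ID: the key at maximal XOR distance
    from the peer, guaranteeing the peer does not store it itself."""
    assert len(node_id) == ID_BYTES
    return bytes(b ^ 0xFF for b in node_id)

peer = bytes.fromhex("00" * 19 + "ff")
vote_key = not_id(peer)
```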
+
+
+Consider adding pieces to the DHT instead of files.
+
+Instead of adding file hashes to the DHT, only piece hashes could be
+added. This would allow a peer to upload to other peers while it is
+still downloading the rest of the file. It is not clear that this is
+needed, since peers will not be uploading and downloading very much of
+the time.