Consider what happens when we are the closest node.

In some of the actions it is unclear what happens when we are one of the
closest nodes to the target key. Do we store values that we publish
ourselves?


Add all cache files to the database.

All files in the cache should be added to the database, so that they can
be checked later to make sure nothing has happened to them. The database
would then need a flag to indicate files that are hashed and available,
but that shouldn't be added to the DHT.


Packages.diff files need to be considered.

The Packages.diff/Index files contain hashes of Packages.diff/rred.gz
[...]

originally provided the piece is probably at fault, since he is now
providing a later piece. This doesn't work if the differing piece is the
first piece, in which case it is downloaded from a 3rd peer, with
consensus revealing the misbehaving peer.


Store and share torrent-like strings for large files.

In addition to storing the file download location (which would still be
used for small files), a bencoded dictionary containing the peer's
hashes of the individual pieces could be stored for the larger files
(20% of all the files are larger than 512 KB). This dictionary would
have the normal piece size, the hash length, and a string containing the
piece hashes, of length <hash length>*<#pieces>. These piece hashes
could be compared ahead of time to determine which peers have the same
piece hashes (they all should), and then used during the download to
verify the downloaded pieces.

For very large files (5 or more pieces), the torrent strings are too
long to store in the DHT and retrieve (a single UDP packet should be
less than 1472 bytes to avoid fragmentation). Instead, the peers should
store the torrent-like string for large files separately, and only
include a reference to it in their stored value for the hash of the
file. The reference would be a hash of the bencoded dictionary. If the
torrent-like string is short enough to store in the DHT (i.e. less than
1472 bytes, or about 70 pieces for the 20-byte SHA1 hash), then a lookup
of that hash in the DHT would give the torrent-like string. Otherwise, a
request to the peer for the hash (just like files are downloaded) should
return the bencoded torrent-like string. (A rough sketch of this scheme
is included after the PeerManager notes below.)


PeerManager needs to track peers' properties.

The PeerManager needs to keep track of the observed properties of seen
peers, to help determine selection criteria for choosing peers to
download from. Each property will give a value from 0 to 1. The relevant
properties are:

 - hash errors in last day (1 = 0, 0 = 3+)
 - recent download speed (1 = fastest, 0 = 0)
 - lag time from request to download (1 = 0, 0 = 15s+)
 - number of pending requests (1 = 0, 0 = max (10))
 - whether a connection is open (1 = yes, 0.9 = no)

These should be combined (multiplied) to provide a sort order for peers
available to download from, which can then be used to assign new
downloads to peers. Pieces should be downloaded from the best peers
first (i.e. piece 0 from the absolute best peer).
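

The multiplicative ranking described above is straightforward to
prototype. Below is a minimal Python sketch of it; the function names
and the exact way each property is clamped to the 0-to-1 range are
assumptions for illustration, not the actual PeerManager code.

    def peer_score(hash_errors_today, download_speed, best_speed,
                   lag_seconds, pending_requests, connection_open,
                   max_pending=10):
        """Scale each observed property to [0, 1] and multiply them."""
        errors = max(0.0, 1.0 - hash_errors_today / 3.0)          # 1 = none, 0 = 3+
        speed = (download_speed / best_speed) if best_speed else 0.0  # 1 = fastest seen
        lag = max(0.0, 1.0 - lag_seconds / 15.0)                  # 1 = instant, 0 = 15s+
        pending = max(0.0, 1.0 - pending_requests / float(max_pending))  # 1 = idle
        conn = 1.0 if connection_open else 0.9                    # small bonus when open
        return errors * speed * lag * pending * conn

    def rank_peers(stats):
        """stats maps peer -> keyword arguments for peer_score().
        Returns peers sorted best first, so piece 0 goes to the first one."""
        return sorted(stats, key=lambda p: peer_score(**stats[p]), reverse=True)

Multiplying rather than averaging means a single bad property, such as
recent hash errors, pushes a peer to the bottom of the list on its own.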
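

As a concrete illustration of the torrent-like string scheme described
in the "Store and share torrent-like strings for large files" notes
above, here is a rough Python sketch. The dictionary keys, the tiny
bencoder, and the rule of switching to a reference purely on the
1472-byte packet size (rather than on a piece count) are all assumptions
made for the example, not the planned format.

    import hashlib

    MAX_UDP_PAYLOAD = 1472       # keep a single UDP packet unfragmented
    PIECE_SIZE = 512 * 1024      # the normal piece size assumed here

    def bencode(obj):
        """Just enough of a bencoder for this sketch: ints, bytes, dicts."""
        if isinstance(obj, int):
            return b'i%de' % obj
        if isinstance(obj, bytes):
            return b'%d:%s' % (len(obj), obj)
        if isinstance(obj, dict):
            body = b''.join(bencode(k) + bencode(v)
                            for k, v in sorted(obj.items()))
            return b'd' + body + b'e'
        raise TypeError('cannot bencode %r' % (obj,))

    def torrent_string(piece_hashes):
        """The bencoded dictionary: piece size, hash length, piece hashes."""
        return bencode({b'piece size': PIECE_SIZE,
                        b'hash length': len(piece_hashes[0]),  # 20 for SHA1
                        b'pieces': b''.join(piece_hashes)})

    def dht_value(piece_hashes):
        """Return (value, separate) for one large file's DHT entry: the
        torrent string itself when it fits in a packet, otherwise its SHA1
        as a reference, with the full string stored/served separately."""
        t = torrent_string(piece_hashes)
        if len(t) <= MAX_UDP_PAYLOAD:
            return t, None
        return hashlib.sha1(t).digest(), t

A peer that receives only the 20-byte reference would then fetch the
full string either with a second DHT lookup of that hash or with a
direct request to the peer, just as files themselves are downloaded.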
When looking up values, the DHT should return both nodes and values.

When a key has multiple values in the DHT, returning a single stored
value may not be sufficient, as then no more nodes can be contacted to
get more stored values. Instead, return both the stored values and the
list of closest nodes, so that the peer doing the lookup can decide when
to stop looking (i.e. when it has received enough values). (A sketch of
this is included after the Kademlia notes below.)


Missing Kademlia implementation details need to be added.

The current implementation is missing some important features, mostly
focused on storing values:

 - values need to be republished (every hour?)
 - original publishers need to republish values (every 24 hours)
 - when a new node is found that is closer to some values, replicate the
   values there without deleting them
 - when a value lookup succeeds, store the value in the closest node
   found that didn't have it
 - make the expiration time of a value exponentially inversely
   proportional to the number of nodes between the current node and the
   node closest to the value
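

To make the "return nodes and values" idea above concrete, here is a
hedged Python sketch of the lookup. The message layout (a dict with
"values" and "nodes" entries), the rpc() callable and the stopping
threshold are hypothetical stand-ins for whatever the DHT RPC layer
actually uses, and XOR-distance ordering is left out for brevity.

    def find_value_response(stored_values, closest_nodes):
        """What a queried node replies with: any values it holds for the key
        AND the closest nodes it knows, so the caller can keep searching."""
        return {'values': stored_values, 'nodes': closest_nodes}

    def iterative_find_value(key, start_nodes, rpc, enough=4):
        """Query nodes until enough values are collected or none remain.
        rpc(node, key) is assumed to return the dict built above."""
        values, queue, seen = [], list(start_nodes), set()
        while queue and len(values) < enough:
            node = queue.pop(0)
            if node in seen:
                continue
            seen.add(node)
            reply = rpc(node, key)
            values.extend(reply['values'])
            queue.extend(n for n in reply['nodes'] if n not in seen)
        return values

The point is simply that the peer doing the lookup, not the replying
node, decides when it has seen enough values.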
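

For the last item in the list above, a minimal sketch of an
exponentially decaying expiration time follows. The 24-hour base, k = 8,
and the halving rule are assumptions (roughly the Kademlia paper's
defaults), not values taken from the current code.

    BASE_EXPIRATION = 24 * 60 * 60   # seconds: full lifetime on the k closest nodes
    K = 8                            # replication parameter

    def expiration_time(nodes_between):
        """nodes_between: how many known nodes lie between us and the node
        closest to the value's key. Within the closest k the value keeps
        its full 24 hours; beyond that its lifetime halves per extra node."""
        beyond_k = max(0, nodes_between - K)
        return BASE_EXPIRATION // (2 ** beyond_k)

    # expiration_time(8) == 86400 (a day); expiration_time(11) == 10800 (3 hours)

This keeps over-cached copies far from the key from lingering long after
the replicas on the closest nodes have expired.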