Added the following FIXME:
How should a Twitter user get their Inbox filled with foreign tweets?
Every imported Twitter user has a profile in the Profile table, so we
could set up a Subscription entry for each of those, meaning they get
collected in the InboxNoticeStream... But this would mean a lot of
unnecessary entries and listings that generally just point to the
locked-down Twitter service.
Let's figure out a good relation so we can connect any profile to any
imported foreign notice, making it show up in the "all" feed.
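As a loose illustration of the kind of relation meant here (purely
hypothetical; neither this class nor its table exists yet), in the
schemaDef style the data objects use:

    // Hypothetical sketch of a profile-to-foreign-notice relation.
    class Notice_delivery extends Managed_DataObject
    {
        public static function schemaDef()
        {
            return array(
                'fields' => array(
                    'profile_id' => array('type' => 'int', 'not null' => true,
                                          'description' => 'profile that gets the notice in its "all" feed'),
                    'notice_id'  => array('type' => 'int', 'not null' => true,
                                          'description' => 'imported foreign notice'),
                ),
                'primary key' => array('profile_id', 'notice_id'),
            );
        }
    }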
Also cleaned up and made typing stricter for the stream, so only
profiles can be submitted. This also means we can create "inbox" or
"all" streams for foreign profiles using the same stream handler (but
of course only for messages we already know about).
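In practice that means the stream constructor type-hints on Profile,
roughly along these lines (a sketch of the idea, not the literal
signature):

    // Sketch: only Profile objects get in, which is also what lets the
    // same handler build "inbox"/"all" streams for foreign profiles.
    class InboxNoticeStream extends ScopingNoticeStream
    {
        public function __construct(Profile $target, Profile $scoped = null)
        {
            // ... set up the scoped stream for $target ...
        }
    }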
To avoid long lookups in a large notice database, the lookback period
for the inbox extends no further back than the profile's creation date.
(This matches the behaviour of Inbox.)
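In query terms this just adds a lower bound on the notice timestamps,
something like the following (illustrative, in the DB_DataObject idiom;
not the literal code):

    // Illustrative: never scan notices older than the profile itself.
    $notice = new Notice();
    $notice->whereAdd(sprintf("notice.created >= '%s'",
                              $notice->escape($profile->created)));
    $notice->find();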
There was a problem with (at least) PuSHpress for WordPress
specifically. A previous attempt to perform a DB transaction backfired
because the remote side could connect to the callback before our
commit had gone through.
I take full responsibility for introducing the bug in the first place :)
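The fix is conceptually just to commit before anything that can trigger
the callback (simplified sketch; sendSubscribeRequest is a hypothetical
name for the outgoing hub request):

    // Simplified sketch: make the row visible first...
    $feedsub->insert();               // committed; callback lookups find it
    // ...and only then contact the hub, which may hit our callback
    // immediately - before a wrapping transaction would have committed.
    $feedsub->sendSubscribeRequest(); // hypothetical method name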
This will work without much extra effort because there will always be
more notices (higher values) than conversations, so there are no
collisions. But please run upgrade.php to avoid keeping an
autoincrement ID on the conversation table.
Installations running code from after 2014-03-01 will have conversation
IDs identical to the IDs of their initial (conversation root) notices.
This will not affect older installations, which will have very
different values.
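The schema change itself (dropping the autoincrement from the
conversation table's id column) should be applied by the usual upgrade
script:

    php scripts/upgrade.php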
Dynamically enable scripts/queuedaemon.php in scripts/getvaliddaemons.php
depending on the common_config('queue', 'daemon') value: true = enabled,
false = disabled. The default is false (see previous commit).
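With the usual config.php conventions, that corresponds to:

    // in config.php: lets getvaliddaemons.php include queuedaemon.php
    $config['queue']['daemon'] = true;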
Many of the microapps are pretty JavaScript dependent, but at the least
we should allow users to get to the new-notice field without requiring
JavaScript to run in the browser. :)
My reasoning: minifying makes third-party review harder. A visitor on
a GNU social site should have no problem reading, understanding and
modifying the JavaScript to their own liking. A minified script is much
more difficult to use, reuse, modify and share.
Minutely will NOT necessarily run every minute, because it depends on
site visitors. Busy sites will manage this, but on sites where visits
(or search engine hits or API calls) are more than a minute apart, the
interval will be much larger.
Generally the Cron plugin will run if at least 1 second of execution
time remains since the start of Action processing. If you want to
change this (such as disabling it with 0 seconds, or running bigger
chunks for maybe 4 seconds) you can, where 'n' is the time in seconds:
addPlugin('Cron', array('secs_per_action' => n));
Add 'rel_to_pageload' => false to the array if you want to run the
queue for a certain number of seconds _despite_ maybe already having
run that long in the previous parts of Action processing.
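For example, to run the queue in 4-second chunks regardless of how long
the rest of Action processing already took:

    addPlugin('Cron', array('secs_per_action' => 4,
                            'rel_to_pageload' => false));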
Perhaps you want to run the cron script remotely, using a machine
capable of background processing (or locally, to avoid running daemon
processes). In that case, simply make an HTTP GET request to the route
/main/cron of your GNU social site. Setting secs_per_action to 0 in the
plugin config means you run all your queue handling by calling
/main/cron (which runs for as long as it can).
/main/cron will output "0" if it has finished processing, "1" if it should
be called again to complete processing (because it ran out of time due to
PHP's max_execution_time INI setting).
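So a remote caller can simply loop until the endpoint reports it is
done, along these lines (a sketch; substitute your own site URL):

    // Sketch of a remote cron runner: hit /main/cron until it prints "0".
    // "1" means it ran out of time and should be called again.
    $url = 'https://social.example/main/cron'; // replace with your site
    do {
        $result = trim(file_get_contents($url));
    } while ($result === '1');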
The Cron plugin also runs events as close to hourly, daily and weekly
as you can get, based on the opportunistic method of running whenever a
user visits the site. This of course means that the cron events should
be as fast as possible, not only to avoid delaying page loads for users
but also to minimize the risk of running into PHP's max_execution_time.
One suggestion is to use the events only to add new queue items for
later processing.
These events are called CronHourly, CronDaily and CronWeekly. However,
there is no guarantee that all events will execute, so some kind of
failsafe, transaction-ish method must be implemented in the future.
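Following that suggestion, an event handler in a plugin would do
nothing but enqueue, roughly like this (sketch; 'examplejob' is a
hypothetical queue transport name):

    // Sketch: keep the cron event cheap; the heavy lifting happens
    // later in a queue handler for the hypothetical 'examplejob' queue.
    class MyPlugin extends Plugin
    {
        public function onCronDaily()
        {
            $qm = QueueManager::get();
            $qm->enqueue('daily-job-data', 'examplejob');
            return true; // let other plugins handle CronDaily too
        }
    }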
To make StatusNet::addPlugin() accept only arrays, lib/default.php had
to be changed, because all plugins had null as their default value
instead of an array.
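In other words, the plugin defaults went roughly from null to empty
arrays (illustrative plugin name, fragment of the defaults array):

    'plugins' => array(
        'default' => array(
            'SomePlugin' => array(), // was: 'SomePlugin' => null
        ),
    ),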
PuSH 0.4: No outgoing 'sync' verifications. Feed renewal script. No auto-renewal.
Among other things (such as permanent subscriptions), PubSubHubbub 0.4
removed the "sync" verification method. This means that any incoming
PuSH subscription request that follows the 0.4 spec won't really
_require_ that we handle it as a background process, but if we were to
try direct verification of the subscription - and fail - there would be
no way we could pick up the ball again. So _essentially_ we require
background processing with retries.
This means we must implement something like the "poorman cron" or
similar, so background processing can be handled
on-demand/on-site-visit. This is how Friendica, Drupal etc. handle it,
and it is necessary for environments where we can't run separate queue
daemons.
When the poorman-cron-ish thing is implemented, auto-renewal will work
for all users.
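In queue terms, handling an incoming 0.4 subscription request would
then look something like this (sketch; 'hubverify' is a hypothetical
transport name):

    // Sketch: accept the request right away, then verify asynchronously,
    // so a failed verification attempt can simply be retried later.
    $qm = QueueManager::get();
    $qm->enqueue($subreq, 'hubverify'); // $subreq: the stored request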
PuSH 0.4 spec:
https://pubsubhubbub.googlecode.com/git/pubsubhubbub-core-0.4.html
More on PuSH 0.4 release (incl. breaking changes):
https://groups.google.com/forum/#!msg/pubsubhubbub/7RPlYMds4RI/2mIHQTdV3aoJ