Minutely will NOT necessarily run every minute, because it depends on
site visitors. Busy sites will manage this, but on sites where the
visits (or search engine crawls or API calls) are more than a minute
apart, the interval will be correspondingly larger.
Generally the Cron plugin will run queue items as long as less than
1 second of execution time has passed since the start of Action
processing. If you want to change this (such as disabling it with
0 seconds, or running bigger chunks of, say, 4 seconds), configure it
like this, where 'n' is the time in seconds:
addPlugin('Cron', array('secs_per_action' => n));
Add 'rel_to_pageload' => false to the array if you want to run the queue
for a fixed number of seconds _regardless_ of how long the earlier parts
of Action processing may already have taken.
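For example, a combined configuration in your config.php could look
something like this sketch (the 4-second value is just an illustration):

addPlugin('Cron', array('secs_per_action' => 4,       // let the queue run for up to 4 seconds per request
                        'rel_to_pageload' => false));  // count those seconds on their own, not from page start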
If you want to run the cron script remotely, from a machine capable
of background processing (or locally, to avoid running daemon processes),
simply do an HTTP GET request to the route /main/cron of your GNU social
instance. Setting secs_per_action to 0 in the plugin config means that
all your queue handling is done by calling /main/cron (which runs for as
long as it can).
/main/cron will output "0" if it has finished processing, "1" if it should
be called again to complete processing (because it ran out of time due to
PHP's max_execution_time INI setting).
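If you go the remote route, a minimal caller for a regular system cron
job could look something like this sketch (the instance URL is a
placeholder, and it assumes PHP's allow_url_fopen is enabled; the script
itself is not part of the plugin):

<?php
// Poll /main/cron until it reports that there is nothing left to process.
$url = 'https://social.example.org/main/cron';   // replace with your own instance
do {
    $result = trim(file_get_contents($url));     // "0" = done, "1" = call again
} while ($result === '1');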
The Cron plugin also runs events as close to hourly, daily and weekly
as it can get, based on the opportunistic method of running whenever a
user visits the site. This of course means that the cron events should
be as fast as possible, not only to avoid delaying page load for users
but also to minimize the risk of running into PHP's max_execution_time.
One suggestion is to use the events only to add new queue items for
later processing (see the sketch below).
These events are called CronHourly, CronDaily and CronWeekly - however,
there is no guarantee that every event will execute, so some kind of
failsafe, transaction-like method must be implemented in the future.
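As a rough illustration of that suggestion, a plugin hooking one of
these events could look something like this sketch (the plugin class,
queue name and queued item are made up for the example, and it assumes
the usual convention where the 'CronDaily' event maps to an
onCronDaily() handler):

class MyMaintenancePlugin extends Plugin
{
    // Runs roughly once a day, whenever the Cron plugin gets the chance.
    function onCronDaily()
    {
        // Keep this cheap: hand the real work to the queue system so a
        // queue daemon (or a later /main/cron call) can pick it up.
        $qm = QueueManager::get();
        $qm->enqueue('expire-old-sessions', 'mymaint');   // hypothetical item and queue name
        return true;   // let other plugins handle the event too
    }
}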
To make StatusNet::addPlugin() accept only arrays,
lib/default.php had to be changed, because all plugins there
had 'null' as their default value instead of an array.
PuSH 0.4: No outgoing 'sync' verifications. Feed renewal script. No auto-renewal.
Among other things (such as permanent subscriptions), PubSubHubbub 0.4
removed the "sync" verification method. This means that incoming
PuSH subscription requests that follow the 0.4 spec don't strictly
_require_ that we handle them as a background process, but if we were to
try direct verification of a subscription - and fail - there would be no
way to pick up the ball again. So _essentially_ we require background
processing with retries.
This means we must implement something like the "poorman cron" or
similar, so background processing can be handled
on-demand/on-site-visit. This is how Friendica, Drupal etc. handle it,
and it is necessary for environments where we can't run separate queue
daemons.
When the poorman-cron-ish thing is implemented, auto-renewal will work
for all users.
PuSH 0.4 spec:
https://pubsubhubbub.googlecode.com/git/pubsubhubbub-core-0.4.html
More on PuSH 0.4 release (incl. breaking changes):
https://groups.google.com/forum/#!msg/pubsubhubbub/7RPlYMds4RI/2mIHQTdV3aoJ
Notice metadata for WebFinger. Not sure if implemented properly.
This is more of a proof of concept and will likely not stay in exactly
this form. We should probably deliver the entire notice when it is
queried via WebFinger.
NoResultException was the wrong choice in this case, because it was
not a DB_DataObject instance that performed the search, but a static
call to the Notice class.
SHARE activities would not be imported from federated instances for local notices
"[...] posts _local_ users (like you) make won't get data about "repeated by"
from federated users"
This was because the ActivityObject would processShare where the shared object
has a _local_ 'actor' URI. Ostatus_profile would then complain that a
"Local user cannot be referenced as remote".
So now we check whether the shared activity object's id (URI) is already in
our Notice table; if it is, we don't have to processActivity - and can skip
ensureActivityObjectProfile.
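In code, the check boils down to something like this sketch (the exact
lookup helper name is an assumption here - the point is just the URI
lookup against the local Notice table before any remote profile handling):

// If the shared object's URI already matches a local notice,
// skip processActivity() / ensureActivityObjectProfile() entirely.
$local = Notice::getKV('uri', $shared->id);   // $shared being the shared ActivityObject
if ($local instanceof Notice) {
    // Already known locally; no remote profile lookup needed.
}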
Florian Schmaus [Sat, 19 Oct 2013 10:19:00 +0000 (12:19 +0200)]
Improved plugins/Xmpp/README
Added the relevant section in INSTALL about queues and daemons needed to
get the plugin running.
Made resource required, as otherwise XMPPHP will send invalid from-JIDs
in its stanzas. For example, when my configuration didn't have the
resource part, outbound stanzas looked like this:
<message
from="gnusocial@example.de/"
to="flow@example.de"
type='chat'>
<body>
User "flow" on GNU Social has said that your
XMPP/Jabber/GTalk screenname belongs to them.
…
</body>
</message>
Note the '/' at the end of the from attribute, without an actual
XMPP resource. According to RFC 6122 section 2.1, "every allowable
portion of a JID MUST NOT be zero bytes in length", so this causes a
jid-malformed error response from the server.
Also, it's nice to know that debug=true will print out all sent and
received stanzas, which helped me to debug the problem.
Furthermore, I added a note that if the XMPP service uses DNS SRV records,
'host' has to be configured (in cases where the service host != the XMPP domain).
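Put together, an Xmpp plugin configuration along these lines might look
roughly like the following sketch (hostnames and credentials are
placeholders, and apart from 'resource', 'host' and 'debug' the key
names are assumptions, so check the plugin README for the exact ones):

addPlugin('Xmpp', array(
    'server'   => 'example.de',        // XMPP domain (assumed key name)
    'host'     => 'xmpp.example.de',   // only needed when DNS SRV points the service at another host
    'user'     => 'gnusocial',         // assumed key name
    'password' => 'secret',            // assumed key name
    'resource' => 'gnusocial',         // required, so outgoing JIDs are not malformed
    'debug'    => true,                // prints all sent and received stanzas
));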