/*
 * net/tipc/subscr.c: TIPC network topology service
 *
 * Copyright (c) 2000-2006, Ericsson AB
 * Copyright (c) 2005-2007, 2010-2013, Wind River Systems
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Neither the names of the copyright holders nor the names of its
 *    contributors may be used to endorse or promote products derived from
 *    this software without specific prior written permission.
 *
 * Alternatively, this software may be distributed under the terms of the
 * GNU General Public License ("GPL") version 2 as published by the Free
 * Software Foundation.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
 * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGE.
 */

#include "core.h"
#include "name_table.h"
#include "subscr.h"

/**
 * struct tipc_subscriber - TIPC network topology subscriber
 * @kref: reference count for this subscriber object
 * @conid: connection identifier of the server connection to this subscriber
 * @lock: control access to subscriber
 * @subscrp_list: list of subscription objects for this subscriber
 */
struct tipc_subscriber {
	struct kref kref;
	int conid;
	spinlock_t lock;
	struct list_head subscrp_list;
};

static void tipc_subscrb_put(struct tipc_subscriber *subscriber);

/**
 * htohl - convert value to endianness used by destination
 * @in: value to convert
 * @swap: non-zero if endianness must be reversed
 *
 * Returns converted value
 */
static u32 htohl(u32 in, int swap)
{
	return swap ? swab32(in) : in;
}

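/* tipc_subscrp_send_event - fill in the subscription's event record, using
 * the subscriber's endianness, and queue it on the topology server
 * connection for transmission
 */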
static void tipc_subscrp_send_event(struct tipc_subscription *sub,
				    u32 found_lower, u32 found_upper,
				    u32 event, u32 port_ref, u32 node)
{
	struct tipc_net *tn = net_generic(sub->net, tipc_net_id);
	struct tipc_subscriber *subscriber = sub->subscriber;
	struct kvec msg_sect;

	msg_sect.iov_base = (void *)&sub->evt;
	msg_sect.iov_len = sizeof(struct tipc_event);
	sub->evt.event = htohl(event, sub->swap);
	sub->evt.found_lower = htohl(found_lower, sub->swap);
	sub->evt.found_upper = htohl(found_upper, sub->swap);
	sub->evt.port.ref = htohl(port_ref, sub->swap);
	sub->evt.port.node = htohl(node, sub->swap);
	tipc_conn_sendmsg(tn->topsrv, subscriber->conid, NULL,
			  msg_sect.iov_base, msg_sect.iov_len);
}

/**
 * tipc_subscrp_check_overlap - test for subscription overlap with the
 * given values
 *
 * Returns 1 if there is overlap, otherwise 0.
 */
int tipc_subscrp_check_overlap(struct tipc_name_seq *seq, u32 found_lower,
			       u32 found_upper)
{
	if (found_lower < seq->lower)
		found_lower = seq->lower;
	if (found_upper > seq->upper)
		found_upper = seq->upper;
	if (found_lower > found_upper)
		return 0;
	return 1;
}

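/* tipc_subscrp_convert_seq_type - byte-swap a name sequence type field if
 * the subscriber uses the opposite endianness
 */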
u32 tipc_subscrp_convert_seq_type(u32 type, int swap)
{
	return htohl(type, swap);
}

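/* tipc_subscrp_convert_seq - copy a name sequence, byte-swapping each field
 * if the subscriber uses the opposite endianness
 */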
void tipc_subscrp_convert_seq(struct tipc_name_seq *in, int swap,
			      struct tipc_name_seq *out)
{
	out->type = htohl(in->type, swap);
	out->lower = htohl(in->lower, swap);
	out->upper = htohl(in->upper, swap);
}

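/* tipc_subscrp_report_overlap - send an event to the subscriber if the
 * reported range overlaps the subscribed range and the event is either
 * mandatory or permitted by the subscription's filter
 */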
void tipc_subscrp_report_overlap(struct tipc_subscription *sub, u32 found_lower,
				 u32 found_upper, u32 event, u32 port_ref,
				 u32 node, int must)
{
	struct tipc_name_seq seq;

	tipc_subscrp_convert_seq(&sub->evt.s.seq, sub->swap, &seq);
	if (!tipc_subscrp_check_overlap(&seq, found_lower, found_upper))
		return;
	if (!must &&
	    !(htohl(sub->evt.s.filter, sub->swap) & TIPC_SUB_PORTS))
		return;

	tipc_subscrp_send_event(sub, found_lower, found_upper, event, port_ref,
				node);
}

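/* tipc_subscrp_timeout - subscription timer expired: unsubscribe from the
 * name table, unlink the subscription from its subscriber, notify the
 * subscriber and drop the timer's reference to the subscription
 */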
static void tipc_subscrp_timeout(struct timer_list *t)
{
	struct tipc_subscription *sub = from_timer(sub, t, timer);
	struct tipc_subscriber *subscriber = sub->subscriber;

	spin_lock_bh(&subscriber->lock);
	tipc_nametbl_unsubscribe(sub);
	list_del(&sub->subscrp_list);
	spin_unlock_bh(&subscriber->lock);

	/* Notify subscriber of timeout */
	tipc_subscrp_send_event(sub, sub->evt.s.seq.lower, sub->evt.s.seq.upper,
				TIPC_SUBSCR_TIMEOUT, 0, 0);

	tipc_subscrp_put(sub);
}

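/* tipc_subscrb_kref_release - free the subscriber once its last reference
 * has been dropped
 */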
static void tipc_subscrb_kref_release(struct kref *kref)
{
	kfree(container_of(kref, struct tipc_subscriber, kref));
}

static void tipc_subscrb_put(struct tipc_subscriber *subscriber)
{
	kref_put(&subscriber->kref, tipc_subscrb_kref_release);
}

static void tipc_subscrb_get(struct tipc_subscriber *subscriber)
{
	kref_get(&subscriber->kref);
}

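/* tipc_subscrp_kref_release - free a subscription once its last reference
 * has been dropped, and release its reference to the owning subscriber
 */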
static void tipc_subscrp_kref_release(struct kref *kref)
{
	struct tipc_subscription *sub = container_of(kref,
						     struct tipc_subscription,
						     kref);
	struct tipc_net *tn = net_generic(sub->net, tipc_net_id);
	struct tipc_subscriber *subscriber = sub->subscriber;

	atomic_dec(&tn->subscription_count);
	kfree(sub);
	tipc_subscrb_put(subscriber);
}

void tipc_subscrp_put(struct tipc_subscription *subscription)
{
	kref_put(&subscription->kref, tipc_subscrp_kref_release);
}

void tipc_subscrp_get(struct tipc_subscription *subscription)
{
	kref_get(&subscription->kref);
}

/* tipc_subscrb_subscrp_delete - delete a specific subscription or all
 * subscriptions for a given subscriber.
 */
static void tipc_subscrb_subscrp_delete(struct tipc_subscriber *subscriber,
					struct tipc_subscr *s)
{
	struct list_head *subscription_list = &subscriber->subscrp_list;
	struct tipc_subscription *sub, *temp;
	u32 timeout;

	spin_lock_bh(&subscriber->lock);
	list_for_each_entry_safe(sub, temp, subscription_list, subscrp_list) {
		if (s && memcmp(s, &sub->evt.s, sizeof(struct tipc_subscr)))
			continue;

		timeout = htohl(sub->evt.s.timeout, sub->swap);
		if (timeout == TIPC_WAIT_FOREVER || del_timer(&sub->timer)) {
			tipc_nametbl_unsubscribe(sub);
			list_del(&sub->subscrp_list);
			tipc_subscrp_put(sub);
		}

		if (s)
			break;
	}
	spin_unlock_bh(&subscriber->lock);
}

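/* tipc_subscrb_create - allocate and initialize a subscriber instance for a
 * newly accepted topology server connection
 */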
static struct tipc_subscriber *tipc_subscrb_create(int conid)
{
	struct tipc_subscriber *subscriber;

	subscriber = kzalloc(sizeof(*subscriber), GFP_ATOMIC);
	if (!subscriber) {
		pr_warn("Subscriber rejected, no memory\n");
		return NULL;
	}
	INIT_LIST_HEAD(&subscriber->subscrp_list);
	kref_init(&subscriber->kref);
	subscriber->conid = conid;
	spin_lock_init(&subscriber->lock);

	return subscriber;
}

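/* tipc_subscrb_delete - delete all subscriptions of a subscriber and drop
 * the connection's reference to it
 */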
static void tipc_subscrb_delete(struct tipc_subscriber *subscriber)
{
	tipc_subscrb_subscrp_delete(subscriber, NULL);
	tipc_subscrb_put(subscriber);
}

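/* tipc_subscrp_cancel - cancel the subscription matching @s; hold a
 * temporary reference on the subscriber so that it cannot be released
 * while the cancellation is in progress
 */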
static void tipc_subscrp_cancel(struct tipc_subscr *s,
				struct tipc_subscriber *subscriber)
{
	tipc_subscrb_get(subscriber);
	tipc_subscrb_subscrp_delete(subscriber, s);
	tipc_subscrb_put(subscriber);
}

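/* tipc_subscrp_create - validate a subscription request and allocate and
 * initialize a subscription object for it
 */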
static struct tipc_subscription *tipc_subscrp_create(struct net *net,
						     struct tipc_subscr *s,
						     int swap)
{
	struct tipc_net *tn = net_generic(net, tipc_net_id);
	struct tipc_subscription *sub;
	u32 filter = htohl(s->filter, swap);

	/* Refuse subscription if global limit exceeded */
	if (atomic_read(&tn->subscription_count) >= TIPC_MAX_SUBSCRIPTIONS) {
		pr_warn("Subscription rejected, limit reached (%u)\n",
			TIPC_MAX_SUBSCRIPTIONS);
		return NULL;
	}

	/* Allocate subscription object */
	sub = kmalloc(sizeof(*sub), GFP_ATOMIC);
	if (!sub) {
		pr_warn("Subscription rejected, no memory\n");
		return NULL;
	}

	/* Initialize subscription object */
	sub->net = net;
	if (((filter & TIPC_SUB_PORTS) && (filter & TIPC_SUB_SERVICE)) ||
	    (htohl(s->seq.lower, swap) > htohl(s->seq.upper, swap))) {
		pr_warn("Subscription rejected, illegal request\n");
		kfree(sub);
		return NULL;
	}

	sub->swap = swap;
	memcpy(&sub->evt.s, s, sizeof(*s));
	atomic_inc(&tn->subscription_count);
	kref_init(&sub->kref);
	return sub;
}

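/* tipc_subscrp_subscribe - create a subscription, link it to its subscriber,
 * register it with the name table and, unless it lasts forever, arm its
 * timeout timer
 */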
static void tipc_subscrp_subscribe(struct net *net, struct tipc_subscr *s,
				   struct tipc_subscriber *subscriber, int swap,
				   bool status)
{
	struct tipc_net *tn = net_generic(net, tipc_net_id);
	struct tipc_subscription *sub = NULL;
	u32 timeout;

	sub = tipc_subscrp_create(net, s, swap);
	if (!sub)
		return tipc_conn_terminate(tn->topsrv, subscriber->conid);

	spin_lock_bh(&subscriber->lock);
	list_add(&sub->subscrp_list, &subscriber->subscrp_list);
	sub->subscriber = subscriber;
	tipc_nametbl_subscribe(sub, status);
	tipc_subscrb_get(subscriber);
	spin_unlock_bh(&subscriber->lock);

	timer_setup(&sub->timer, tipc_subscrp_timeout, 0);
	timeout = htohl(sub->evt.s.timeout, swap);

	if (timeout != TIPC_WAIT_FOREVER)
		mod_timer(&sub->timer, jiffies + msecs_to_jiffies(timeout));
}

/* Handle one termination request for the subscriber */
static void tipc_subscrb_release_cb(int conid, void *usr_data)
{
	tipc_subscrb_delete((struct tipc_subscriber *)usr_data);
}

/* Handle one request to create a new subscription for the subscriber */
static void tipc_subscrb_rcv_cb(struct net *net, int conid,
				struct sockaddr_tipc *addr, void *usr_data,
				void *buf, size_t len)
{
	struct tipc_subscriber *subscriber = usr_data;
	struct tipc_subscr *s = (struct tipc_subscr *)buf;
	bool status;
	int swap;

	/* Determine subscriber's endianness */
	swap = !(s->filter & (TIPC_SUB_PORTS | TIPC_SUB_SERVICE |
			      TIPC_SUB_CANCEL));

	/* Detect & process a subscription cancellation request */
	if (s->filter & htohl(TIPC_SUB_CANCEL, swap)) {
		s->filter &= ~htohl(TIPC_SUB_CANCEL, swap);
		return tipc_subscrp_cancel(s, subscriber);
	}

	status = !(s->filter & htohl(TIPC_SUB_NO_STATUS, swap));
	tipc_subscrp_subscribe(net, s, subscriber, swap, status);
}

/* Handle one request to establish a new subscriber */
static void *tipc_subscrb_connect_cb(int conid)
{
	return (void *)tipc_subscrb_create(conid);
}

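/* tipc_topsrv_start - create the per-namespace topology server instance,
 * bound to the TIPC_TOP_SRV name sequence with node scope, and start it
 */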
int tipc_topsrv_start(struct net *net)
{
	struct tipc_net *tn = net_generic(net, tipc_net_id);
	const char name[] = "topology_server";
	struct tipc_server *topsrv;
	struct sockaddr_tipc *saddr;

	saddr = kzalloc(sizeof(*saddr), GFP_ATOMIC);
	if (!saddr)
		return -ENOMEM;
	saddr->family = AF_TIPC;
	saddr->addrtype = TIPC_ADDR_NAMESEQ;
	saddr->addr.nameseq.type = TIPC_TOP_SRV;
	saddr->addr.nameseq.lower = TIPC_TOP_SRV;
	saddr->addr.nameseq.upper = TIPC_TOP_SRV;
	saddr->scope = TIPC_NODE_SCOPE;

	topsrv = kzalloc(sizeof(*topsrv), GFP_ATOMIC);
	if (!topsrv) {
		kfree(saddr);
		return -ENOMEM;
	}
	topsrv->net = net;
	topsrv->saddr = saddr;
	topsrv->imp = TIPC_CRITICAL_IMPORTANCE;
	topsrv->type = SOCK_SEQPACKET;
	topsrv->max_rcvbuf_size = sizeof(struct tipc_subscr);
	topsrv->tipc_conn_recvmsg = tipc_subscrb_rcv_cb;
	topsrv->tipc_conn_new = tipc_subscrb_connect_cb;
	topsrv->tipc_conn_release = tipc_subscrb_release_cb;

	strncpy(topsrv->name, name, strlen(name) + 1);
	tn->topsrv = topsrv;
	atomic_set(&tn->subscription_count, 0);

	return tipc_server_start(topsrv);
}

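/* tipc_topsrv_stop - stop the per-namespace topology server and free its
 * resources
 */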
void tipc_topsrv_stop(struct net *net)
{
	struct tipc_net *tn = net_generic(net, tipc_net_id);
	struct tipc_server *topsrv = tn->topsrv;

	tipc_server_stop(topsrv);
	kfree(topsrv->saddr);
	kfree(topsrv);
}