Merge branch 'for-3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup

Pull cgroup changes from Tejun Heo:
 "Mostly changes to get the v2 interface ready.  The core features are
  mostly ready now and I think it's reasonable to expect to drop the
  devel mask in one or two devel cycles at least for a subset of
  controllers.

   - cgroup added a controller dependency mechanism so that block cgroup
     can depend on memory cgroup.  This will be used to finally support
     IO provisioning on the writeback traffic, which is currently being
     implemented.

   - The v2 interface now uses a separate table so that the interface
     files for the new interface are explicitly declared in one place.
     Each controller will explicitly review and add the files for the
     new interface.

   - cpuset is getting ready for hierarchical behavior in a style
     similar to other controllers, so that an ancestor's
     configuration change doesn't change the descendants' configurations
     irreversibly and processes aren't silently migrated when a CPU or
     node goes down.

  All the changes are to the new interface and no behavior changed for
  the multiple hierarchies"

* 'for-3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup: (29 commits)
  cpuset: fix the WARN_ON() in update_nodemasks_hier()
  cgroup: initialize cgrp_dfl_root_inhibit_ss_mask from !->dfl_files test
  cgroup: make CFTYPE_ONLY_ON_DFL and CFTYPE_NO_ internal to cgroup core
  cgroup: distinguish the default and legacy hierarchies when handling cftypes
  cgroup: replace cgroup_add_cftypes() with cgroup_add_legacy_cftypes()
  cgroup: rename cgroup_subsys->base_cftypes to ->legacy_cftypes
  cgroup: split cgroup_base_files[] into cgroup_{dfl|legacy}_base_files[]
  cpuset: export effective masks to userspace
  cpuset: allow writing offlined masks to cpuset.cpus/mems
  cpuset: enable onlined cpu/node in effective masks
  cpuset: refactor cpuset_hotplug_update_tasks()
  cpuset: make cs->{cpus, mems}_allowed as user-configured masks
  cpuset: apply cs->effective_{cpus,mems}
  cpuset: initialize top_cpuset's configured masks at mount
  cpuset: use effective cpumask to build sched domains
  cpuset: inherit ancestor's masks if effective_{cpus, mems} becomes empty
  cpuset: update cs->effective_{cpus, mems} when config changes
  cpuset: update cpuset->effective_{cpus,mems} at hotplug
  cpuset: add cs->effective_cpus and cs->effective_mems
  cgroup: clean up sane_behavior handling
  ...
Linus Torvalds 2014-08-04 10:11:28 -07:00
commit 47dfe4037e
16 changed files with 814 additions and 444 deletions


@ -599,6 +599,20 @@ fork. If this method returns 0 (success) then this should remain valid
while the caller holds cgroup_mutex and it is ensured that either
attach() or cancel_attach() will be called in future.
void css_reset(struct cgroup_subsys_state *css)
(cgroup_mutex held by caller)
An optional operation which should restore @css's configuration to the
initial state. This is currently only used on the unified hierarchy
when a subsystem is disabled on a cgroup through
"cgroup.subtree_control" but should remain enabled because other
subsystems depend on it. cgroup core makes such a css invisible by
removing the associated interface files and invokes this callback so
that the hidden subsystem can return to the initial neutral state.
This prevents unexpected resource control from a hidden css and
ensures that the configuration is in the initial state when it is made
visible again later.
void cancel_attach(struct cgroup *cgrp, struct cgroup_taskset *tset)
(cgroup_mutex held by caller)
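
To make the css_reset() contract above concrete, here is a minimal sketch of
what such a callback might look like for a hypothetical controller with a
single configurable limit; the "examplecg" names are invented for
illustration and are not part of this patch:

struct examplecg {
	struct cgroup_subsys_state css;
	u64 limit;	/* 0 means "no limit configured" */
};

static inline struct examplecg *css_to_examplecg(struct cgroup_subsys_state *css)
{
	return container_of(css, struct examplecg, css);
}

/*
 * Called when the css stays enabled only because another subsystem
 * depends on it but has been hidden from userland.  Drop back to the
 * vanilla state so the hidden css exerts no resource control.
 */
static void examplecg_css_reset(struct cgroup_subsys_state *css)
{
	css_to_examplecg(css)->limit = 0;
}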


@ -94,12 +94,35 @@ change soon.
  mount -t cgroup -o __DEVEL__sane_behavior cgroup $MOUNT_POINT

-All controllers which are not bound to other hierarchies are
-automatically bound to unified hierarchy and show up at the root of
-it. Controllers which are enabled only in the root of unified
-hierarchy can be bound to other hierarchies at any time. This allows
-mixing unified hierarchy with the traditional multiple hierarchies in
-a fully backward compatible way.
+All controllers which support the unified hierarchy and are not bound
+to other hierarchies are automatically bound to unified hierarchy and
+show up at the root of it. Controllers which are enabled only in the
+root of unified hierarchy can be bound to other hierarchies. This
+allows mixing unified hierarchy with the traditional multiple
+hierarchies in a fully backward compatible way.
For development purposes, the following boot parameter makes all
controllers to appear on the unified hierarchy whether supported or
not.
cgroup__DEVEL__legacy_files_on_dfl
A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy. Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the unified hierarchy after the final umount of the previous
hierarchy. Similarly, a controller should be fully disabled to be
moved out of the unified hierarchy and it may take some time for the
disabled controller to become available for other hierarchies;
furthermore, due to dependencies among controllers, other controllers
may need to be disabled too.
While useful for development and manual configurations, dynamically
moving controllers between the unified and other hierarchies is
strongly discouraged for production use. It is recommended to decide
the hierarchies and controller associations before starting using the
controllers.
2-2. cgroup.subtree_control


@ -928,7 +928,15 @@ struct cgroup_subsys blkio_cgrp_subsys = {
.css_offline = blkcg_css_offline,
.css_free = blkcg_css_free,
.can_attach = blkcg_can_attach,
-.base_cftypes = blkcg_files,
+.legacy_cftypes = blkcg_files,
#ifdef CONFIG_MEMCG
/*
* This ensures that, if available, memcg is automatically enabled
* together on the default hierarchy so that the owner cgroup can
* be retrieved from writeback pages.
*/
.depends_on = 1 << memory_cgrp_id,
#endif
};
EXPORT_SYMBOL_GPL(blkio_cgrp_subsys);
@ -1120,7 +1128,8 @@ int blkcg_policy_register(struct blkcg_policy *pol)
/* everything is in place, add intf files for the new policy */
if (pol->cftypes)
-WARN_ON(cgroup_add_cftypes(&blkio_cgrp_subsys, pol->cftypes));
+WARN_ON(cgroup_add_legacy_cftypes(&blkio_cgrp_subsys,
+pol->cftypes));
ret = 0;
out_unlock:
mutex_unlock(&blkcg_pol_mutex);


@ -412,13 +412,13 @@ static void throtl_pd_init(struct blkcg_gq *blkg)
int rw;
/*
-* If sane_hierarchy is enabled, we switch to properly hierarchical
+* If on the default hierarchy, we switch to properly hierarchical
* behavior where limits on a given throtl_grp are applied to the
* whole subtree rather than just the group itself. e.g. If 16M
* read_bps limit is set on the root group, the whole system can't
* exceed 16M for the device.
*
-* If sane_hierarchy is not enabled, the broken flat hierarchy
+* If not on the default hierarchy, the broken flat hierarchy
* behavior is retained where all throtl_grps are treated as if
* they're all separate root groups right below throtl_data.
* Limits of a group don't interact with limits of other groups
@ -426,7 +426,7 @@ static void throtl_pd_init(struct blkcg_gq *blkg)
*/
parent_sq = &td->service_queue;
-if (cgroup_sane_behavior(blkg->blkcg->css.cgroup) && blkg->parent)
+if (cgroup_on_dfl(blkg->blkcg->css.cgroup) && blkg->parent)
parent_sq = &blkg_to_tg(blkg->parent)->service_queue;
throtl_service_queue_init(&tg->service_queue, parent_sq);


@ -203,7 +203,15 @@ struct cgroup {
struct kernfs_node *kn; /* cgroup kernfs entry */
struct kernfs_node *populated_kn; /* kn for "cgroup.subtree_populated" */
-/* the bitmask of subsystems enabled on the child cgroups */
+/*
* The bitmask of subsystems enabled on the child cgroups.
* ->subtree_control is the one configured through
* "cgroup.subtree_control" while ->child_subsys_mask is the
* effective one which may have more subsystems enabled.
* Controller knobs are made available iff it's enabled in
* ->subtree_control.
*/
unsigned int subtree_control;
unsigned int child_subsys_mask;
/* Private pointers for each registered subsystem */
@ -248,73 +256,9 @@ struct cgroup {
/* cgroup_root->flags */
enum {
+CGRP_ROOT_SANE_BEHAVIOR = (1 << 0), /* __DEVEL__sane_behavior specified */
-/*
* Unfortunately, cgroup core and various controllers are riddled
* with idiosyncrasies and pointless options. The following flag,
* when set, will force sane behavior - some options are forced on,
* others are disallowed, and some controllers will change their
* hierarchical or other behaviors.
*
* The set of behaviors affected by this flag are still being
* determined and developed and the mount option for this flag is
* prefixed with __DEVEL__. The prefix will be dropped once we
* reach the point where all behaviors are compatible with the
* planned unified hierarchy, which will automatically turn on this
* flag.
*
* The followings are the behaviors currently affected this flag.
*
* - Mount options "noprefix", "xattr", "clone_children",
* "release_agent" and "name" are disallowed.
*
* - When mounting an existing superblock, mount options should
* match.
*
* - Remount is disallowed.
*
* - rename(2) is disallowed.
*
* - "tasks" is removed. Everything should be at process
* granularity. Use "cgroup.procs" instead.
*
* - "cgroup.procs" is not sorted. pids will be unique unless they
* got recycled inbetween reads.
*
* - "release_agent" and "notify_on_release" are removed.
* Replacement notification mechanism will be implemented.
*
* - "cgroup.clone_children" is removed.
*
* - "cgroup.subtree_populated" is available. Its value is 0 if
* the cgroup and its descendants contain no task; otherwise, 1.
* The file also generates kernfs notification which can be
* monitored through poll and [di]notify when the value of the
* file changes.
*
* - If mount is requested with sane_behavior but without any
* subsystem, the default unified hierarchy is mounted.
*
* - cpuset: tasks will be kept in empty cpusets when hotplug happens
* and take masks of ancestors with non-empty cpus/mems, instead of
* being moved to an ancestor.
*
* - cpuset: a task can be moved into an empty cpuset, and again it
* takes masks of ancestors.
*
* - memcg: use_hierarchy is on by default and the cgroup file for
* the flag is not created.
*
* - blkcg: blk-throttle becomes properly hierarchical.
*
* - debug: disallowed on the default hierarchy.
*/
CGRP_ROOT_SANE_BEHAVIOR = (1 << 0),
CGRP_ROOT_NOPREFIX = (1 << 1), /* mounted subsystems have no named prefix */
CGRP_ROOT_XATTR = (1 << 2), /* supports extended attributes */
/* mount options live below bit 16 */
CGRP_ROOT_OPTION_MASK = (1 << 16) - 1,
};
/*
@ -440,9 +384,11 @@ struct css_set {
enum {
CFTYPE_ONLY_ON_ROOT = (1 << 0), /* only create on root cgrp */
CFTYPE_NOT_ON_ROOT = (1 << 1), /* don't create on root cgrp */
CFTYPE_INSANE = (1 << 2), /* don't create if sane_behavior */
CFTYPE_NO_PREFIX = (1 << 3), /* (DON'T USE FOR NEW FILES) no subsys prefix */
CFTYPE_ONLY_ON_DFL = (1 << 4), /* only on default hierarchy */
/* internal flags, do not use outside cgroup core proper */
__CFTYPE_ONLY_ON_DFL = (1 << 16), /* only on default hierarchy */
__CFTYPE_NOT_ON_DFL = (1 << 17), /* not on default hierarchy */
};
#define MAX_CFTYPE_NAME 64
@ -526,20 +472,64 @@ struct cftype {
extern struct cgroup_root cgrp_dfl_root;
extern struct css_set init_css_set;
/**
* cgroup_on_dfl - test whether a cgroup is on the default hierarchy
* @cgrp: the cgroup of interest
*
* The default hierarchy is the v2 interface of cgroup and this function
* can be used to test whether a cgroup is on the default hierarchy for
* cases where a subsystem should behave differently depending on the
* interface version.
*
* The set of behaviors which change on the default hierarchy are still
* being determined and the mount option is prefixed with __DEVEL__.
*
* List of changed behaviors:
*
* - Mount options "noprefix", "xattr", "clone_children", "release_agent"
* and "name" are disallowed.
*
* - When mounting an existing superblock, mount options should match.
*
* - Remount is disallowed.
*
* - rename(2) is disallowed.
*
* - "tasks" is removed. Everything should be at process granularity. Use
* "cgroup.procs" instead.
*
* - "cgroup.procs" is not sorted. pids will be unique unless they got
* recycled inbetween reads.
*
* - "release_agent" and "notify_on_release" are removed. Replacement
* notification mechanism will be implemented.
*
* - "cgroup.clone_children" is removed.
*
* - "cgroup.subtree_populated" is available. Its value is 0 if the cgroup
* and its descendants contain no task; otherwise, 1. The file also
* generates kernfs notification which can be monitored through poll and
* [di]notify when the value of the file changes.
*
* - cpuset: tasks will be kept in empty cpusets when hotplug happens and
* take masks of ancestors with non-empty cpus/mems, instead of being
* moved to an ancestor.
*
* - cpuset: a task can be moved into an empty cpuset, and again it takes
* masks of ancestors.
*
* - memcg: use_hierarchy is on by default and the cgroup file for the flag
* is not created.
*
* - blkcg: blk-throttle becomes properly hierarchical.
*
* - debug: disallowed on the default hierarchy.
*/
static inline bool cgroup_on_dfl(const struct cgroup *cgrp)
{
return cgrp->root == &cgrp_dfl_root;
}
/*
* See the comment above CGRP_ROOT_SANE_BEHAVIOR for details. This
* function can be called as long as @cgrp is accessible.
*/
static inline bool cgroup_sane_behavior(const struct cgroup *cgrp)
{
return cgrp->root->flags & CGRP_ROOT_SANE_BEHAVIOR;
}
/* no synchronization, the result can only be used as a hint */
static inline bool cgroup_has_tasks(struct cgroup *cgrp)
{
@ -602,7 +592,8 @@ static inline void pr_cont_cgroup_path(struct cgroup *cgrp)
char *task_cgroup_path(struct task_struct *task, char *buf, size_t buflen);
-int cgroup_add_cftypes(struct cgroup_subsys *ss, struct cftype *cfts);
+int cgroup_add_dfl_cftypes(struct cgroup_subsys *ss, struct cftype *cfts);
int cgroup_add_legacy_cftypes(struct cgroup_subsys *ss, struct cftype *cfts);
int cgroup_rm_cftypes(struct cftype *cfts);
bool cgroup_is_descendant(struct cgroup *cgrp, struct cgroup *ancestor);
@ -634,6 +625,7 @@ struct cgroup_subsys {
int (*css_online)(struct cgroup_subsys_state *css);
void (*css_offline)(struct cgroup_subsys_state *css);
void (*css_free)(struct cgroup_subsys_state *css);
void (*css_reset)(struct cgroup_subsys_state *css);
int (*can_attach)(struct cgroup_subsys_state *css,
struct cgroup_taskset *tset);
@ -682,8 +674,21 @@ struct cgroup_subsys {
*/
struct list_head cfts;
-/* base cftypes, automatically registered with subsys itself */
-struct cftype *base_cftypes;
+/*
+* Base cftypes which are automatically registered. The two can
* point to the same array.
*/
struct cftype *dfl_cftypes; /* for the default hierarchy */
struct cftype *legacy_cftypes; /* for the legacy hierarchies */
/*
* A subsystem may depend on other subsystems. When such subsystem
* is enabled on a cgroup, the depended-upon subsystems are enabled
* together if available. Subsystems enabled due to dependency are
* not visible to userland until explicitly enabled. The following
* specifies the mask of subsystems that this one depends on.
*/
unsigned int depends_on;
};
#define SUBSYS(_x) extern struct cgroup_subsys _x ## _cgrp_subsys;
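
As a rough illustration of how the new fields fit together, the sketch below
declares a hypothetical "examplecg" controller. Every name in it is invented
for illustration; only the cgroup_subsys fields themselves (dfl_cftypes,
legacy_cftypes, depends_on, css_reset) come from this patch, and the
alloc/free/read/write handlers are assumed to be defined elsewhere.

static struct cftype examplecg_dfl_files[] = {
	{
		.name = "max",			/* v2-style knob name (hypothetical) */
		.read_u64 = examplecg_max_read,
		.write_u64 = examplecg_max_write,
	},
	{ }	/* terminate */
};

static struct cftype examplecg_legacy_files[] = {
	{
		.name = "max_in_bytes",		/* old knob name kept for compatibility */
		.read_u64 = examplecg_max_read,
		.write_u64 = examplecg_max_write,
	},
	{ }	/* terminate */
};

struct cgroup_subsys examplecg_cgrp_subsys = {
	.css_alloc	= examplecg_css_alloc,
	.css_free	= examplecg_css_free,
	.css_reset	= examplecg_css_reset,
	.dfl_cftypes	= examplecg_dfl_files,		/* default hierarchy interface */
	.legacy_cftypes	= examplecg_legacy_files,	/* traditional hierarchies */
#ifdef CONFIG_MEMCG
	/* have memcg enabled alongside this controller on the default hierarchy */
	.depends_on	= 1 << memory_cgrp_id,
#endif
};

With dfl_cftypes and legacy_cftypes pointing at different arrays, each set is
registered only on the matching hierarchy; pointing both at the same array
keeps the old single-table behaviour.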


@ -149,12 +149,14 @@ struct cgroup_root cgrp_dfl_root;
*/ */
static bool cgrp_dfl_root_visible; static bool cgrp_dfl_root_visible;
/*
* Set by the boot param of the same name and makes subsystems with NULL
* ->dfl_files to use ->legacy_files on the default hierarchy.
*/
static bool cgroup_legacy_files_on_dfl;
/* some controllers are not supported in the default hierarchy */
-static const unsigned int cgrp_dfl_root_inhibit_ss_mask = 0
-#ifdef CONFIG_CGROUP_DEBUG
-| (1 << debug_cgrp_id)
-#endif
-;
+static unsigned int cgrp_dfl_root_inhibit_ss_mask;
/* The list of hierarchy roots */ /* The list of hierarchy roots */
@ -180,13 +182,15 @@ static u64 css_serial_nr_next = 1;
*/
static int need_forkexit_callback __read_mostly;
-static struct cftype cgroup_base_files[];
+static struct cftype cgroup_dfl_base_files[];
static struct cftype cgroup_legacy_base_files[];
static void cgroup_put(struct cgroup *cgrp);
static int rebind_subsystems(struct cgroup_root *dst_root,
unsigned int ss_mask);
static int cgroup_destroy_locked(struct cgroup *cgrp);
-static int create_css(struct cgroup *cgrp, struct cgroup_subsys *ss);
+static int create_css(struct cgroup *cgrp, struct cgroup_subsys *ss,
+bool visible);
static void css_release(struct percpu_ref *ref);
static void kill_css(struct cgroup_subsys_state *css);
static int cgroup_addrm_files(struct cgroup *cgrp, struct cftype cfts[],
@ -1036,6 +1040,58 @@ static void cgroup_put(struct cgroup *cgrp)
css_put(&cgrp->self); css_put(&cgrp->self);
} }
/**
* cgroup_refresh_child_subsys_mask - update child_subsys_mask
* @cgrp: the target cgroup
*
* On the default hierarchy, a subsystem may request other subsystems to be
* enabled together through its ->depends_on mask. In such cases, more
* subsystems than specified in "cgroup.subtree_control" may be enabled.
*
* This function determines which subsystems need to be enabled given the
* current @cgrp->subtree_control and records it in
* @cgrp->child_subsys_mask. The resulting mask is always a superset of
* @cgrp->subtree_control and follows the usual hierarchy rules.
*/
static void cgroup_refresh_child_subsys_mask(struct cgroup *cgrp)
{
struct cgroup *parent = cgroup_parent(cgrp);
unsigned int cur_ss_mask = cgrp->subtree_control;
struct cgroup_subsys *ss;
int ssid;
lockdep_assert_held(&cgroup_mutex);
if (!cgroup_on_dfl(cgrp)) {
cgrp->child_subsys_mask = cur_ss_mask;
return;
}
while (true) {
unsigned int new_ss_mask = cur_ss_mask;
for_each_subsys(ss, ssid)
if (cur_ss_mask & (1 << ssid))
new_ss_mask |= ss->depends_on;
/*
* Mask out subsystems which aren't available. This can
* happen only if some depended-upon subsystems were bound
* to non-default hierarchies.
*/
if (parent)
new_ss_mask &= parent->child_subsys_mask;
else
new_ss_mask &= cgrp->root->subsys_mask;
if (new_ss_mask == cur_ss_mask)
break;
cur_ss_mask = new_ss_mask;
}
cgrp->child_subsys_mask = cur_ss_mask;
}
/** /**
* cgroup_kn_unlock - unlocking helper for cgroup kernfs methods * cgroup_kn_unlock - unlocking helper for cgroup kernfs methods
* @kn: the kernfs_node being serviced * @kn: the kernfs_node being serviced
@ -1208,12 +1264,15 @@ static int rebind_subsystems(struct cgroup_root *dst_root, unsigned int ss_mask)
up_write(&css_set_rwsem);
src_root->subsys_mask &= ~(1 << ssid);
-src_root->cgrp.child_subsys_mask &= ~(1 << ssid);
+src_root->cgrp.subtree_control &= ~(1 << ssid);
+cgroup_refresh_child_subsys_mask(&src_root->cgrp);
/* default hierarchy doesn't enable controllers by default */
dst_root->subsys_mask |= 1 << ssid;
-if (dst_root != &cgrp_dfl_root)
-dst_root->cgrp.child_subsys_mask |= 1 << ssid;
+if (dst_root != &cgrp_dfl_root) {
+dst_root->cgrp.subtree_control |= 1 << ssid;
+cgroup_refresh_child_subsys_mask(&dst_root->cgrp);
+}
if (ss->bind)
ss->bind(css);
@ -1233,8 +1292,6 @@ static int cgroup_show_options(struct seq_file *seq,
for_each_subsys(ss, ssid) for_each_subsys(ss, ssid)
if (root->subsys_mask & (1 << ssid)) if (root->subsys_mask & (1 << ssid))
seq_printf(seq, ",%s", ss->name); seq_printf(seq, ",%s", ss->name);
if (root->flags & CGRP_ROOT_SANE_BEHAVIOR)
seq_puts(seq, ",sane_behavior");
if (root->flags & CGRP_ROOT_NOPREFIX) if (root->flags & CGRP_ROOT_NOPREFIX)
seq_puts(seq, ",noprefix"); seq_puts(seq, ",noprefix");
if (root->flags & CGRP_ROOT_XATTR) if (root->flags & CGRP_ROOT_XATTR)
@ -1268,6 +1325,7 @@ static int parse_cgroupfs_options(char *data, struct cgroup_sb_opts *opts)
bool all_ss = false, one_ss = false; bool all_ss = false, one_ss = false;
unsigned int mask = -1U; unsigned int mask = -1U;
struct cgroup_subsys *ss; struct cgroup_subsys *ss;
int nr_opts = 0;
int i; int i;
#ifdef CONFIG_CPUSETS #ifdef CONFIG_CPUSETS
@ -1277,6 +1335,8 @@ static int parse_cgroupfs_options(char *data, struct cgroup_sb_opts *opts)
memset(opts, 0, sizeof(*opts)); memset(opts, 0, sizeof(*opts));
while ((token = strsep(&o, ",")) != NULL) { while ((token = strsep(&o, ",")) != NULL) {
nr_opts++;
if (!*token) if (!*token)
return -EINVAL; return -EINVAL;
if (!strcmp(token, "none")) { if (!strcmp(token, "none")) {
@ -1361,36 +1421,32 @@ static int parse_cgroupfs_options(char *data, struct cgroup_sb_opts *opts)
return -ENOENT; return -ENOENT;
} }
/* Consistency checks */
if (opts->flags & CGRP_ROOT_SANE_BEHAVIOR) {
pr_warn("sane_behavior: this is still under development and its behaviors will change, proceed at your own risk\n");
-if ((opts->flags & (CGRP_ROOT_NOPREFIX | CGRP_ROOT_XATTR)) ||
-opts->cpuset_clone_children || opts->release_agent ||
-opts->name) {
-pr_err("sane_behavior: noprefix, xattr, clone_children, release_agent and name are not allowed\n");
+if (nr_opts != 1) {
+pr_err("sane_behavior: no other mount options allowed\n");
return -EINVAL;
}
+return 0;
-} else {
/*
* If the 'all' option was specified select all the
* subsystems, otherwise if 'none', 'name=' and a subsystem
* name options were not specified, let's default to 'all'
*/
if (all_ss || (!one_ss && !opts->none && !opts->name))
for_each_subsys(ss, i)
if (!ss->disabled)
opts->subsys_mask |= (1 << i);
/*
* We either have to specify by name or by subsystems. (So
* all empty hierarchies must have a name).
*/
if (!opts->subsys_mask && !opts->name)
return -EINVAL;
} }
/*
* If the 'all' option was specified select all the subsystems,
* otherwise if 'none', 'name=' and a subsystem name options were
* not specified, let's default to 'all'
*/
if (all_ss || (!one_ss && !opts->none && !opts->name))
for_each_subsys(ss, i)
if (!ss->disabled)
opts->subsys_mask |= (1 << i);
/*
* We either have to specify by name or by subsystems. (So all
* empty hierarchies must have a name).
*/
if (!opts->subsys_mask && !opts->name)
return -EINVAL;
/* /*
* Option noprefix was introduced just for backward compatibility * Option noprefix was introduced just for backward compatibility
* with the old cpuset, so we allow noprefix only if mounting just * with the old cpuset, so we allow noprefix only if mounting just
@ -1399,7 +1455,6 @@ static int parse_cgroupfs_options(char *data, struct cgroup_sb_opts *opts)
if ((opts->flags & CGRP_ROOT_NOPREFIX) && (opts->subsys_mask & mask)) if ((opts->flags & CGRP_ROOT_NOPREFIX) && (opts->subsys_mask & mask))
return -EINVAL; return -EINVAL;
/* Can't specify "none" and some subsystems */ /* Can't specify "none" and some subsystems */
if (opts->subsys_mask && opts->none) if (opts->subsys_mask && opts->none)
return -EINVAL; return -EINVAL;
@ -1414,8 +1469,8 @@ static int cgroup_remount(struct kernfs_root *kf_root, int *flags, char *data)
struct cgroup_sb_opts opts;
unsigned int added_mask, removed_mask;
-if (root->flags & CGRP_ROOT_SANE_BEHAVIOR) {
-pr_err("sane_behavior: remount is not allowed\n");
+if (root == &cgrp_dfl_root) {
+pr_err("remount is not allowed\n");
return -EINVAL;
}
@ -1434,11 +1489,10 @@ static int cgroup_remount(struct kernfs_root *kf_root, int *flags, char *data)
removed_mask = root->subsys_mask & ~opts.subsys_mask;
/* Don't allow flags or name to change at remount */
-if (((opts.flags ^ root->flags) & CGRP_ROOT_OPTION_MASK) ||
+if ((opts.flags ^ root->flags) ||
(opts.name && strcmp(opts.name, root->name))) {
pr_err("option or name mismatch, new: 0x%x \"%s\", old: 0x%x \"%s\"\n",
-opts.flags & CGRP_ROOT_OPTION_MASK, opts.name ?: "",
-root->flags & CGRP_ROOT_OPTION_MASK, root->name);
+opts.flags, opts.name ?: "", root->flags, root->name);
ret = -EINVAL;
goto out_unlock;
}
@ -1563,6 +1617,7 @@ static int cgroup_setup_root(struct cgroup_root *root, unsigned int ss_mask)
{ {
LIST_HEAD(tmp_links); LIST_HEAD(tmp_links);
struct cgroup *root_cgrp = &root->cgrp; struct cgroup *root_cgrp = &root->cgrp;
struct cftype *base_files;
struct css_set *cset; struct css_set *cset;
int i, ret; int i, ret;
@ -1600,7 +1655,12 @@ static int cgroup_setup_root(struct cgroup_root *root, unsigned int ss_mask)
}
root_cgrp->kn = root->kf_root->kn;
-ret = cgroup_addrm_files(root_cgrp, cgroup_base_files, true);
+if (root == &cgrp_dfl_root)
+base_files = cgroup_dfl_base_files;
+else
+base_files = cgroup_legacy_base_files;
+ret = cgroup_addrm_files(root_cgrp, base_files, true);
if (ret)
goto destroy_root;
@ -1672,7 +1732,7 @@ static struct dentry *cgroup_mount(struct file_system_type *fs_type,
goto out_unlock;
/* look for a matching existing root */
-if (!opts.subsys_mask && !opts.none && !opts.name) {
+if (opts.flags & CGRP_ROOT_SANE_BEHAVIOR) {
cgrp_dfl_root_visible = true;
root = &cgrp_dfl_root;
cgroup_get(&root->cgrp);
@ -1730,15 +1790,8 @@ static struct dentry *cgroup_mount(struct file_system_type *fs_type,
goto out_unlock;
}
-if ((root->flags ^ opts.flags) & CGRP_ROOT_OPTION_MASK) {
-if ((root->flags | opts.flags) & CGRP_ROOT_SANE_BEHAVIOR) {
-pr_err("sane_behavior: new mount options should match the existing superblock\n");
-ret = -EINVAL;
-goto out_unlock;
-} else {
-pr_warn("new mount options do not match the existing superblock, will be ignored\n");
-}
-}
+if (root->flags ^ opts.flags)
+pr_warn("new mount options do not match the existing superblock, will be ignored\n");
/* /*
* We want to reuse @root whose lifetime is governed by its * We want to reuse @root whose lifetime is governed by its
@ -2457,9 +2510,7 @@ static int cgroup_release_agent_show(struct seq_file *seq, void *v)
static int cgroup_sane_behavior_show(struct seq_file *seq, void *v) static int cgroup_sane_behavior_show(struct seq_file *seq, void *v)
{ {
struct cgroup *cgrp = seq_css(seq)->cgroup; seq_puts(seq, "0\n");
seq_printf(seq, "%d\n", cgroup_sane_behavior(cgrp));
return 0; return 0;
} }
@ -2496,7 +2547,7 @@ static int cgroup_controllers_show(struct seq_file *seq, void *v)
{ {
struct cgroup *cgrp = seq_css(seq)->cgroup; struct cgroup *cgrp = seq_css(seq)->cgroup;
cgroup_print_ss_mask(seq, cgroup_parent(cgrp)->child_subsys_mask); cgroup_print_ss_mask(seq, cgroup_parent(cgrp)->subtree_control);
return 0; return 0;
} }
@ -2505,7 +2556,7 @@ static int cgroup_subtree_control_show(struct seq_file *seq, void *v)
{ {
struct cgroup *cgrp = seq_css(seq)->cgroup; struct cgroup *cgrp = seq_css(seq)->cgroup;
cgroup_print_ss_mask(seq, cgrp->child_subsys_mask); cgroup_print_ss_mask(seq, cgrp->subtree_control);
return 0; return 0;
} }
@ -2611,6 +2662,7 @@ static ssize_t cgroup_subtree_control_write(struct kernfs_open_file *of,
loff_t off) loff_t off)
{ {
unsigned int enable = 0, disable = 0; unsigned int enable = 0, disable = 0;
unsigned int css_enable, css_disable, old_ctrl, new_ctrl;
struct cgroup *cgrp, *child; struct cgroup *cgrp, *child;
struct cgroup_subsys *ss; struct cgroup_subsys *ss;
char *tok; char *tok;
@ -2650,11 +2702,26 @@ static ssize_t cgroup_subtree_control_write(struct kernfs_open_file *of,
for_each_subsys(ss, ssid) {
if (enable & (1 << ssid)) {
-if (cgrp->child_subsys_mask & (1 << ssid)) {
+if (cgrp->subtree_control & (1 << ssid)) {
enable &= ~(1 << ssid);
continue;
}
/* unavailable or not enabled on the parent? */
if (!(cgrp_dfl_root.subsys_mask & (1 << ssid)) ||
(cgroup_parent(cgrp) &&
!(cgroup_parent(cgrp)->subtree_control & (1 << ssid)))) {
ret = -ENOENT;
goto out_unlock;
}
/*
* @ss is already enabled through dependency and
* we'll just make it visible. Skip draining.
*/
if (cgrp->child_subsys_mask & (1 << ssid))
continue;
/* /*
* Because css offlining is asynchronous, userland * Because css offlining is asynchronous, userland
* might try to re-enable the same controller while * might try to re-enable the same controller while
@ -2677,23 +2744,15 @@ static ssize_t cgroup_subtree_control_write(struct kernfs_open_file *of,
return restart_syscall(); return restart_syscall();
} }
/* unavailable or not enabled on the parent? */
if (!(cgrp_dfl_root.subsys_mask & (1 << ssid)) ||
(cgroup_parent(cgrp) &&
!(cgroup_parent(cgrp)->child_subsys_mask & (1 << ssid)))) {
ret = -ENOENT;
goto out_unlock;
}
} else if (disable & (1 << ssid)) {
-if (!(cgrp->child_subsys_mask & (1 << ssid))) {
+if (!(cgrp->subtree_control & (1 << ssid))) {
disable &= ~(1 << ssid);
continue;
}
/* a child has it enabled? */
cgroup_for_each_live_child(child, cgrp) {
-if (child->child_subsys_mask & (1 << ssid)) {
+if (child->subtree_control & (1 << ssid)) {
ret = -EBUSY;
goto out_unlock;
}
@ -2707,7 +2766,7 @@ static ssize_t cgroup_subtree_control_write(struct kernfs_open_file *of,
}
/*
-* Except for the root, child_subsys_mask must be zero for a cgroup
+* Except for the root, subtree_control must be zero for a cgroup
* with tasks so that child cgroups don't compete against tasks.
*/
if (enable && cgroup_parent(cgrp) && !list_empty(&cgrp->cset_links)) {
@ -2716,36 +2775,75 @@ static ssize_t cgroup_subtree_control_write(struct kernfs_open_file *of,
}
/*
-* Create csses for enables and update child_subsys_mask. This
-* changes cgroup_e_css() results which in turn makes the
-* subsequent cgroup_update_dfl_csses() associate all tasks in the
-* subtree to the updated csses.
+* Update subsys masks and calculate what needs to be done. More
+* subsystems than specified may need to be enabled or disabled
+* depending on subsystem dependencies.
*/
cgrp->subtree_control |= enable;
cgrp->subtree_control &= ~disable;
old_ctrl = cgrp->child_subsys_mask;
cgroup_refresh_child_subsys_mask(cgrp);
new_ctrl = cgrp->child_subsys_mask;
css_enable = ~old_ctrl & new_ctrl;
css_disable = old_ctrl & ~new_ctrl;
enable |= css_enable;
disable |= css_disable;
/*
* Create new csses or make the existing ones visible. A css is
* created invisible if it's being implicitly enabled through
* dependency. An invisible css is made visible when the userland
* explicitly enables it.
*/
for_each_subsys(ss, ssid) {
if (!(enable & (1 << ssid)))
continue;
cgroup_for_each_live_child(child, cgrp) {
-ret = create_css(child, ss);
+if (css_enable & (1 << ssid))
+ret = create_css(child, ss,
+cgrp->subtree_control & (1 << ssid));
+else
+ret = cgroup_populate_dir(child, 1 << ssid);
if (ret)
goto err_undo_css;
}
}
-cgrp->child_subsys_mask |= enable;
-cgrp->child_subsys_mask &= ~disable;
+/*
+* At this point, cgroup_e_css() results reflect the new csses
+* making the following cgroup_update_dfl_csses() properly update
+* css associations of all tasks in the subtree.
+*/
ret = cgroup_update_dfl_csses(cgrp);
if (ret)
goto err_undo_css;
-/* all tasks are now migrated away from the old csses, kill them */
+/*
* All tasks are migrated out of disabled csses. Kill or hide
* them. A css is hidden when the userland requests it to be
* disabled while other subsystems are still depending on it. The
* css must not actively control resources and be in the vanilla
* state if it's made visible again later. Controllers which may
* be depended upon should provide ->css_reset() for this purpose.
*/
for_each_subsys(ss, ssid) {
if (!(disable & (1 << ssid)))
continue;
-cgroup_for_each_live_child(child, cgrp)
-kill_css(cgroup_css(child, ss));
+cgroup_for_each_live_child(child, cgrp) {
+struct cgroup_subsys_state *css = cgroup_css(child, ss);
if (css_disable & (1 << ssid)) {
kill_css(css);
} else {
cgroup_clear_dir(child, 1 << ssid);
if (ss->css_reset)
ss->css_reset(css);
}
}
}
kernfs_activate(cgrp->kn); kernfs_activate(cgrp->kn);
@ -2755,8 +2853,9 @@ out_unlock:
return ret ?: nbytes;
err_undo_css:
-cgrp->child_subsys_mask &= ~enable;
-cgrp->child_subsys_mask |= disable;
+cgrp->subtree_control &= ~enable;
+cgrp->subtree_control |= disable;
+cgroup_refresh_child_subsys_mask(cgrp);
for_each_subsys(ss, ssid) {
if (!(enable & (1 << ssid)))
@ -2764,8 +2863,14 @@ err_undo_css:
cgroup_for_each_live_child(child, cgrp) {
struct cgroup_subsys_state *css = cgroup_css(child, ss);
-if (css)
+if (!css)
+continue;
+if (css_enable & (1 << ssid))
kill_css(css);
+else
+cgroup_clear_dir(child, 1 << ssid);
}
}
goto out_unlock;
@ -2878,9 +2983,9 @@ static int cgroup_rename(struct kernfs_node *kn, struct kernfs_node *new_parent,
/*
* This isn't a proper migration and its usefulness is very
-* limited. Disallow if sane_behavior.
+* limited. Disallow on the default hierarchy.
*/
-if (cgroup_sane_behavior(cgrp))
+if (cgroup_on_dfl(cgrp))
return -EPERM;
/*
@ -2964,9 +3069,9 @@ static int cgroup_addrm_files(struct cgroup *cgrp, struct cftype cfts[],
for (cft = cfts; cft->name[0] != '\0'; cft++) {
/* does cft->flags tell us to skip this file on @cgrp? */
-if ((cft->flags & CFTYPE_ONLY_ON_DFL) && !cgroup_on_dfl(cgrp))
+if ((cft->flags & __CFTYPE_ONLY_ON_DFL) && !cgroup_on_dfl(cgrp))
continue;
-if ((cft->flags & CFTYPE_INSANE) && cgroup_sane_behavior(cgrp))
+if ((cft->flags & __CFTYPE_NOT_ON_DFL) && cgroup_on_dfl(cgrp))
continue;
if ((cft->flags & CFTYPE_NOT_ON_ROOT) && !cgroup_parent(cgrp))
continue;
@ -3024,6 +3129,9 @@ static void cgroup_exit_cftypes(struct cftype *cfts)
kfree(cft->kf_ops); kfree(cft->kf_ops);
cft->kf_ops = NULL; cft->kf_ops = NULL;
cft->ss = NULL; cft->ss = NULL;
/* revert flags set by cgroup core while adding @cfts */
cft->flags &= ~(__CFTYPE_ONLY_ON_DFL | __CFTYPE_NOT_ON_DFL);
} }
} }
@ -3109,7 +3217,7 @@ int cgroup_rm_cftypes(struct cftype *cfts)
* function currently returns 0 as long as @cfts registration is successful
* even if some file creation attempts on existing cgroups fail.
*/
-int cgroup_add_cftypes(struct cgroup_subsys *ss, struct cftype *cfts)
+static int cgroup_add_cftypes(struct cgroup_subsys *ss, struct cftype *cfts)
{
int ret;
@ -3134,6 +3242,40 @@ int cgroup_add_cftypes(struct cgroup_subsys *ss, struct cftype *cfts)
return ret;
}
/**
* cgroup_add_dfl_cftypes - add an array of cftypes for default hierarchy
* @ss: target cgroup subsystem
* @cfts: zero-length name terminated array of cftypes
*
* Similar to cgroup_add_cftypes() but the added files are only used for
* the default hierarchy.
*/
int cgroup_add_dfl_cftypes(struct cgroup_subsys *ss, struct cftype *cfts)
{
struct cftype *cft;
for (cft = cfts; cft && cft->name[0] != '\0'; cft++)
cft->flags |= __CFTYPE_ONLY_ON_DFL;
return cgroup_add_cftypes(ss, cfts);
}
/**
* cgroup_add_legacy_cftypes - add an array of cftypes for legacy hierarchies
* @ss: target cgroup subsystem
* @cfts: zero-length name terminated array of cftypes
*
* Similar to cgroup_add_cftypes() but the added files are only used for
* the legacy hierarchies.
*/
int cgroup_add_legacy_cftypes(struct cgroup_subsys *ss, struct cftype *cfts)
{
struct cftype *cft;
for (cft = cfts; cft && cft->name[0] != '\0'; cft++)
cft->flags |= __CFTYPE_NOT_ON_DFL;
return cgroup_add_cftypes(ss, cfts);
}
/** /**
* cgroup_task_count - count the number of tasks in a cgroup. * cgroup_task_count - count the number of tasks in a cgroup.
* @cgrp: the cgroup in question * @cgrp: the cgroup in question
@ -3699,8 +3841,9 @@ after:
*
* All this extra complexity was caused by the original implementation
* committing to an entirely unnecessary property. In the long term, we
-* want to do away with it. Explicitly scramble sort order if
-* sane_behavior so that no such expectation exists in the new interface.
+* want to do away with it. Explicitly scramble sort order if on the
+* default hierarchy so that no such expectation exists in the new
+* interface.
*
* Scrambling is done by swapping every two consecutive bits, which is
* non-identity one-to-one mapping which disturbs sort order sufficiently.
@ -3715,7 +3858,7 @@ static pid_t pid_fry(pid_t pid)
static pid_t cgroup_pid_fry(struct cgroup *cgrp, pid_t pid)
{
-if (cgroup_sane_behavior(cgrp))
+if (cgroup_on_dfl(cgrp))
return pid_fry(pid);
else
return pid;
@ -3818,7 +3961,7 @@ static int pidlist_array_load(struct cgroup *cgrp, enum cgroup_filetype type,
css_task_iter_end(&it);
length = n;
/* now sort & (if procs) strip out duplicates */
-if (cgroup_sane_behavior(cgrp))
+if (cgroup_on_dfl(cgrp))
sort(array, length, sizeof(pid_t), fried_cmppid, NULL);
else
sort(array, length, sizeof(pid_t), cmppid, NULL);
@ -4040,7 +4183,43 @@ static int cgroup_clone_children_write(struct cgroup_subsys_state *css,
return 0;
}
-static struct cftype cgroup_base_files[] = {
+/* cgroup core interface files for the default hierarchy */
+static struct cftype cgroup_dfl_base_files[] = {
{
.name = "cgroup.procs",
.seq_start = cgroup_pidlist_start,
.seq_next = cgroup_pidlist_next,
.seq_stop = cgroup_pidlist_stop,
.seq_show = cgroup_pidlist_show,
.private = CGROUP_FILE_PROCS,
.write = cgroup_procs_write,
.mode = S_IRUGO | S_IWUSR,
},
{
.name = "cgroup.controllers",
.flags = CFTYPE_ONLY_ON_ROOT,
.seq_show = cgroup_root_controllers_show,
},
{
.name = "cgroup.controllers",
.flags = CFTYPE_NOT_ON_ROOT,
.seq_show = cgroup_controllers_show,
},
{
.name = "cgroup.subtree_control",
.seq_show = cgroup_subtree_control_show,
.write = cgroup_subtree_control_write,
},
{
.name = "cgroup.populated",
.flags = CFTYPE_NOT_ON_ROOT,
.seq_show = cgroup_populated_show,
},
{ } /* terminate */
};
/* cgroup core interface files for the legacy hierarchies */
static struct cftype cgroup_legacy_base_files[] = {
{ {
.name = "cgroup.procs", .name = "cgroup.procs",
.seq_start = cgroup_pidlist_start, .seq_start = cgroup_pidlist_start,
@ -4053,7 +4232,6 @@ static struct cftype cgroup_base_files[] = {
},
{
.name = "cgroup.clone_children",
-.flags = CFTYPE_INSANE,
.read_u64 = cgroup_clone_children_read,
.write_u64 = cgroup_clone_children_write,
},
@ -4062,36 +4240,8 @@ static struct cftype cgroup_base_files[] = {
.flags = CFTYPE_ONLY_ON_ROOT, .flags = CFTYPE_ONLY_ON_ROOT,
.seq_show = cgroup_sane_behavior_show, .seq_show = cgroup_sane_behavior_show,
}, },
{
.name = "cgroup.controllers",
.flags = CFTYPE_ONLY_ON_DFL | CFTYPE_ONLY_ON_ROOT,
.seq_show = cgroup_root_controllers_show,
},
{
.name = "cgroup.controllers",
.flags = CFTYPE_ONLY_ON_DFL | CFTYPE_NOT_ON_ROOT,
.seq_show = cgroup_controllers_show,
},
{
.name = "cgroup.subtree_control",
.flags = CFTYPE_ONLY_ON_DFL,
.seq_show = cgroup_subtree_control_show,
.write = cgroup_subtree_control_write,
},
{
.name = "cgroup.populated",
.flags = CFTYPE_ONLY_ON_DFL | CFTYPE_NOT_ON_ROOT,
.seq_show = cgroup_populated_show,
},
/*
* Historical crazy stuff. These don't have "cgroup." prefix and
* don't exist if sane_behavior. If you're depending on these, be
* prepared to be burned.
*/
{
.name = "tasks",
-.flags = CFTYPE_INSANE, /* use "procs" instead */
.seq_start = cgroup_pidlist_start,
.seq_next = cgroup_pidlist_next,
.seq_stop = cgroup_pidlist_stop,
@ -4102,13 +4252,12 @@ static struct cftype cgroup_base_files[] = {
},
{
.name = "notify_on_release",
-.flags = CFTYPE_INSANE,
.read_u64 = cgroup_read_notify_on_release,
.write_u64 = cgroup_write_notify_on_release,
},
{
.name = "release_agent",
-.flags = CFTYPE_INSANE | CFTYPE_ONLY_ON_ROOT,
+.flags = CFTYPE_ONLY_ON_ROOT,
.seq_show = cgroup_release_agent_show,
.write = cgroup_release_agent_write,
.max_write_len = PATH_MAX - 1,
@ -4316,12 +4465,14 @@ static void offline_css(struct cgroup_subsys_state *css)
* create_css - create a cgroup_subsys_state
* @cgrp: the cgroup new css will be associated with
* @ss: the subsys of new css
+* @visible: whether to create control knobs for the new css or not
*
* Create a new css associated with @cgrp - @ss pair. On success, the new
-* css is online and installed in @cgrp with all interface files created.
-* Returns 0 on success, -errno on failure.
+* css is online and installed in @cgrp with all interface files created if
+* @visible. Returns 0 on success, -errno on failure.
*/
-static int create_css(struct cgroup *cgrp, struct cgroup_subsys *ss)
+static int create_css(struct cgroup *cgrp, struct cgroup_subsys *ss,
+bool visible)
{
struct cgroup *parent = cgroup_parent(cgrp);
struct cgroup_subsys_state *parent_css = cgroup_css(parent, ss);
@ -4345,9 +4496,11 @@ static int create_css(struct cgroup *cgrp, struct cgroup_subsys *ss)
goto err_free_percpu_ref;
css->id = err;
-err = cgroup_populate_dir(cgrp, 1 << ss->id);
-if (err)
-goto err_free_id;
+if (visible) {
+err = cgroup_populate_dir(cgrp, 1 << ss->id);
+if (err)
+goto err_free_id;
+}
/* @css is ready to be brought online now, make it visible */
list_add_tail_rcu(&css->sibling, &parent_css->children);
@ -4387,6 +4540,7 @@ static int cgroup_mkdir(struct kernfs_node *parent_kn, const char *name,
struct cgroup_root *root;
struct cgroup_subsys *ss;
struct kernfs_node *kn;
struct cftype *base_files;
int ssid, ret;
parent = cgroup_kn_lock_live(parent_kn);
@ -4457,14 +4611,20 @@ static int cgroup_mkdir(struct kernfs_node *parent_kn, const char *name,
if (ret)
goto out_destroy;
-ret = cgroup_addrm_files(cgrp, cgroup_base_files, true);
+if (cgroup_on_dfl(cgrp))
+base_files = cgroup_dfl_base_files;
+else
+base_files = cgroup_legacy_base_files;
+ret = cgroup_addrm_files(cgrp, base_files, true);
if (ret)
goto out_destroy;
/* let's create and online css's */
for_each_subsys(ss, ssid) {
if (parent->child_subsys_mask & (1 << ssid)) {
-ret = create_css(cgrp, ss);
+ret = create_css(cgrp, ss,
+parent->subtree_control & (1 << ssid));
if (ret)
goto out_destroy;
}
@ -4472,10 +4632,12 @@ static int cgroup_mkdir(struct kernfs_node *parent_kn, const char *name,
/*
* On the default hierarchy, a child doesn't automatically inherit
-* child_subsys_mask from the parent. Each is configured manually.
+* subtree_control from the parent. Each is configured manually.
*/
-if (!cgroup_on_dfl(cgrp))
-cgrp->child_subsys_mask = parent->child_subsys_mask;
+if (!cgroup_on_dfl(cgrp)) {
+cgrp->subtree_control = parent->subtree_control;
+cgroup_refresh_child_subsys_mask(cgrp);
+}
kernfs_activate(kn);
@ -4738,8 +4900,7 @@ static void __init cgroup_init_subsys(struct cgroup_subsys *ss, bool early)
*/
int __init cgroup_init_early(void)
{
-static struct cgroup_sb_opts __initdata opts =
-{ .flags = CGRP_ROOT_SANE_BEHAVIOR };
+static struct cgroup_sb_opts __initdata opts;
struct cgroup_subsys *ss;
int i;
@ -4777,7 +4938,8 @@ int __init cgroup_init(void)
unsigned long key;
int ssid, err;
-BUG_ON(cgroup_init_cftypes(NULL, cgroup_base_files));
+BUG_ON(cgroup_init_cftypes(NULL, cgroup_dfl_base_files));
+BUG_ON(cgroup_init_cftypes(NULL, cgroup_legacy_base_files));
mutex_lock(&cgroup_mutex);
@ -4809,9 +4971,22 @@ int __init cgroup_init(void)
* disabled flag and cftype registration needs kmalloc,
* both of which aren't available during early_init.
*/
-if (!ss->disabled) {
-cgrp_dfl_root.subsys_mask |= 1 << ss->id;
-WARN_ON(cgroup_add_cftypes(ss, ss->base_cftypes));
+if (ss->disabled)
+continue;
+cgrp_dfl_root.subsys_mask |= 1 << ss->id;
if (cgroup_legacy_files_on_dfl && !ss->dfl_cftypes)
ss->dfl_cftypes = ss->legacy_cftypes;
if (!ss->dfl_cftypes)
cgrp_dfl_root_inhibit_ss_mask |= 1 << ss->id;
if (ss->dfl_cftypes == ss->legacy_cftypes) {
WARN_ON(cgroup_add_cftypes(ss, ss->dfl_cftypes));
} else {
WARN_ON(cgroup_add_dfl_cftypes(ss, ss->dfl_cftypes));
WARN_ON(cgroup_add_legacy_cftypes(ss, ss->legacy_cftypes));
}
}
@ -5207,6 +5382,14 @@ static int __init cgroup_disable(char *str)
} }
__setup("cgroup_disable=", cgroup_disable); __setup("cgroup_disable=", cgroup_disable);
static int __init cgroup_set_legacy_files_on_dfl(char *str)
{
printk("cgroup: using legacy files on the default hierarchy\n");
cgroup_legacy_files_on_dfl = true;
return 0;
}
__setup("cgroup__DEVEL__legacy_files_on_dfl", cgroup_set_legacy_files_on_dfl);
/** /**
* css_tryget_online_from_dir - get corresponding css from a cgroup dentry * css_tryget_online_from_dir - get corresponding css from a cgroup dentry
* @dentry: directory dentry of interest * @dentry: directory dentry of interest
@ -5401,6 +5584,6 @@ static struct cftype debug_files[] = {
struct cgroup_subsys debug_cgrp_subsys = {
.css_alloc = debug_css_alloc,
.css_free = debug_css_free,
-.base_cftypes = debug_files,
+.legacy_cftypes = debug_files,
};
#endif /* CONFIG_CGROUP_DEBUG */


@ -480,5 +480,5 @@ struct cgroup_subsys freezer_cgrp_subsys = {
.css_free = freezer_css_free,
.attach = freezer_attach,
.fork = freezer_fork,
-.base_cftypes = files,
+.legacy_cftypes = files,
};


@ -76,8 +76,34 @@ struct cpuset {
struct cgroup_subsys_state css;
unsigned long flags; /* "unsigned long" so bitops work */
-cpumask_var_t cpus_allowed; /* CPUs allowed to tasks in cpuset */
-nodemask_t mems_allowed; /* Memory Nodes allowed to tasks */
+/*
* On default hierarchy:
*
* The user-configured masks can only be changed by writing to
* cpuset.cpus and cpuset.mems, and won't be limited by the
* parent masks.
*
* The effective masks are the real masks that apply to the tasks
* in the cpuset. They may be changed if the configured masks are
* changed or hotplug happens.
*
* effective_mask == configured_mask & parent's effective_mask,
* and if it ends up empty, it will inherit the parent's mask.
*
*
* On legacy hierarchy:
*
* The user-configured masks are always the same as the effective masks.
*/
/* user-configured CPUs and Memory Nodes allowed to tasks */
cpumask_var_t cpus_allowed;
nodemask_t mems_allowed;
/* effective CPUs and Memory Nodes allowed to tasks */
cpumask_var_t effective_cpus;
nodemask_t effective_mems;
/* /*
* This is old Memory Nodes tasks took on. * This is old Memory Nodes tasks took on.
@ -307,9 +333,9 @@ static struct file_system_type cpuset_fs_type = {
*/
static void guarantee_online_cpus(struct cpuset *cs, struct cpumask *pmask)
{
-while (!cpumask_intersects(cs->cpus_allowed, cpu_online_mask))
+while (!cpumask_intersects(cs->effective_cpus, cpu_online_mask))
cs = parent_cs(cs);
-cpumask_and(pmask, cs->cpus_allowed, cpu_online_mask);
+cpumask_and(pmask, cs->effective_cpus, cpu_online_mask);
}
/* /*
@ -325,9 +351,9 @@ static void guarantee_online_cpus(struct cpuset *cs, struct cpumask *pmask)
*/
static void guarantee_online_mems(struct cpuset *cs, nodemask_t *pmask)
{
-while (!nodes_intersects(cs->mems_allowed, node_states[N_MEMORY]))
+while (!nodes_intersects(cs->effective_mems, node_states[N_MEMORY]))
cs = parent_cs(cs);
-nodes_and(*pmask, cs->mems_allowed, node_states[N_MEMORY]);
+nodes_and(*pmask, cs->effective_mems, node_states[N_MEMORY]);
}
/* /*
@ -376,13 +402,20 @@ static struct cpuset *alloc_trial_cpuset(struct cpuset *cs)
if (!trial)
return NULL;
-if (!alloc_cpumask_var(&trial->cpus_allowed, GFP_KERNEL)) {
-kfree(trial);
-return NULL;
-}
-cpumask_copy(trial->cpus_allowed, cs->cpus_allowed);
+if (!alloc_cpumask_var(&trial->cpus_allowed, GFP_KERNEL))
+goto free_cs;
+if (!alloc_cpumask_var(&trial->effective_cpus, GFP_KERNEL))
+goto free_cpus;
+cpumask_copy(trial->cpus_allowed, cs->cpus_allowed);
+cpumask_copy(trial->effective_cpus, cs->effective_cpus);
return trial;
+free_cpus:
+free_cpumask_var(trial->cpus_allowed);
+free_cs:
+kfree(trial);
+return NULL;
}
/** /**
@ -391,6 +424,7 @@ static struct cpuset *alloc_trial_cpuset(struct cpuset *cs)
*/
static void free_trial_cpuset(struct cpuset *trial)
{
free_cpumask_var(trial->effective_cpus);
free_cpumask_var(trial->cpus_allowed);
kfree(trial);
}
@ -436,9 +470,9 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
par = parent_cs(cur);
-/* We must be a subset of our parent cpuset */
+/* On legacy hierarchy, we must be a subset of our parent cpuset. */
ret = -EACCES;
-if (!is_cpuset_subset(trial, par))
+if (!cgroup_on_dfl(cur->css.cgroup) && !is_cpuset_subset(trial, par))
goto out;
/*
@ -480,11 +514,11 @@ out:
#ifdef CONFIG_SMP
/*
* Helper routine for generate_sched_domains().
-* Do cpusets a, b have overlapping cpus_allowed masks?
+* Do cpusets a, b have overlapping effective cpus_allowed masks?
*/
static int cpusets_overlap(struct cpuset *a, struct cpuset *b)
{
-return cpumask_intersects(a->cpus_allowed, b->cpus_allowed);
+return cpumask_intersects(a->effective_cpus, b->effective_cpus);
}
static void static void
@ -601,7 +635,7 @@ static int generate_sched_domains(cpumask_var_t **domains,
*dattr = SD_ATTR_INIT;
update_domain_attr_tree(dattr, &top_cpuset);
}
-cpumask_copy(doms[0], top_cpuset.cpus_allowed);
+cpumask_copy(doms[0], top_cpuset.effective_cpus);
goto done;
}
@ -705,7 +739,7 @@ restart:
struct cpuset *b = csa[j];
if (apn == b->pn) {
-cpumask_or(dp, dp, b->cpus_allowed);
+cpumask_or(dp, dp, b->effective_cpus);
if (dattr)
update_domain_attr_tree(dattr + nslot, b);
@ -757,7 +791,7 @@ static void rebuild_sched_domains_locked(void)
* passing doms with offlined cpu to partition_sched_domains().
* Anyways, hotplug work item will rebuild sched domains.
*/
-if (!cpumask_equal(top_cpuset.cpus_allowed, cpu_active_mask))
+if (!cpumask_equal(top_cpuset.effective_cpus, cpu_active_mask))
goto out;
/* Generate domain masks and attrs */
@ -781,45 +815,6 @@ void rebuild_sched_domains(void)
mutex_unlock(&cpuset_mutex); mutex_unlock(&cpuset_mutex);
} }
/*
* effective_cpumask_cpuset - return nearest ancestor with non-empty cpus
* @cs: the cpuset in interest
*
* A cpuset's effective cpumask is the cpumask of the nearest ancestor
* with non-empty cpus. We use effective cpumask whenever:
* - we update tasks' cpus_allowed. (they take on the ancestor's cpumask
* if the cpuset they reside in has no cpus)
* - we want to retrieve task_cs(tsk)'s cpus_allowed.
*
* Called with cpuset_mutex held. cpuset_cpus_allowed_fallback() is an
* exception. See comments there.
*/
static struct cpuset *effective_cpumask_cpuset(struct cpuset *cs)
{
while (cpumask_empty(cs->cpus_allowed))
cs = parent_cs(cs);
return cs;
}
/*
* effective_nodemask_cpuset - return nearest ancestor with non-empty mems
* @cs: the cpuset in interest
*
* A cpuset's effective nodemask is the nodemask of the nearest ancestor
* with non-empty mems. We use effective nodemask whenever:
* - we update tasks' mems_allowed. (they take on the ancestor's nodemask
* if the cpuset they reside in has no mems)
* - we want to retrieve task_cs(tsk)'s mems_allowed.
*
* Called with cpuset_mutex held.
*/
static struct cpuset *effective_nodemask_cpuset(struct cpuset *cs)
{
while (nodes_empty(cs->mems_allowed))
cs = parent_cs(cs);
return cs;
}
/** /**
* update_tasks_cpumask - Update the cpumasks of tasks in the cpuset. * update_tasks_cpumask - Update the cpumasks of tasks in the cpuset.
* @cs: the cpuset in which each task's cpus_allowed mask needs to be changed * @cs: the cpuset in which each task's cpus_allowed mask needs to be changed
@ -830,53 +825,80 @@ static struct cpuset *effective_nodemask_cpuset(struct cpuset *cs)
*/ */
static void update_tasks_cpumask(struct cpuset *cs) static void update_tasks_cpumask(struct cpuset *cs)
{ {
struct cpuset *cpus_cs = effective_cpumask_cpuset(cs);
struct css_task_iter it; struct css_task_iter it;
struct task_struct *task; struct task_struct *task;
css_task_iter_start(&cs->css, &it); css_task_iter_start(&cs->css, &it);
while ((task = css_task_iter_next(&it))) while ((task = css_task_iter_next(&it)))
set_cpus_allowed_ptr(task, cpus_cs->cpus_allowed); set_cpus_allowed_ptr(task, cs->effective_cpus);
css_task_iter_end(&it); css_task_iter_end(&it);
} }
/* /*
* update_tasks_cpumask_hier - Update the cpumasks of tasks in the hierarchy. * update_cpumasks_hier - Update effective cpumasks and tasks in the subtree
* @root_cs: the root cpuset of the hierarchy * @cs: the cpuset to consider
* @update_root: update root cpuset or not? * @new_cpus: temp variable for calculating new effective_cpus
* *
* This will update cpumasks of tasks in @root_cs and all other empty cpusets * When configured cpumask is changed, the effective cpumasks of this cpuset
* which take on cpumask of @root_cs. * and all its descendants need to be updated.
*
* On legacy hierarchy, effective_cpus will be the same as cpus_allowed.
* *
* Called with cpuset_mutex held * Called with cpuset_mutex held
*/ */
static void update_tasks_cpumask_hier(struct cpuset *root_cs, bool update_root) static void update_cpumasks_hier(struct cpuset *cs, struct cpumask *new_cpus)
{ {
struct cpuset *cp; struct cpuset *cp;
struct cgroup_subsys_state *pos_css; struct cgroup_subsys_state *pos_css;
bool need_rebuild_sched_domains = false;
rcu_read_lock(); rcu_read_lock();
cpuset_for_each_descendant_pre(cp, pos_css, root_cs) { cpuset_for_each_descendant_pre(cp, pos_css, cs) {
if (cp == root_cs) { struct cpuset *parent = parent_cs(cp);
if (!update_root)
continue; cpumask_and(new_cpus, cp->cpus_allowed, parent->effective_cpus);
} else {
/* skip the whole subtree if @cp has some CPUs */ /*
if (!cpumask_empty(cp->cpus_allowed)) { * If it becomes empty, inherit the effective mask of the
pos_css = css_rightmost_descendant(pos_css); * parent, which is guaranteed to have some CPUs.
continue; */
} if (cpumask_empty(new_cpus))
cpumask_copy(new_cpus, parent->effective_cpus);
/* Skip the whole subtree if the cpumask remains the same. */
if (cpumask_equal(new_cpus, cp->effective_cpus)) {
pos_css = css_rightmost_descendant(pos_css);
continue;
} }
if (!css_tryget_online(&cp->css)) if (!css_tryget_online(&cp->css))
continue; continue;
rcu_read_unlock(); rcu_read_unlock();
mutex_lock(&callback_mutex);
cpumask_copy(cp->effective_cpus, new_cpus);
mutex_unlock(&callback_mutex);
WARN_ON(!cgroup_on_dfl(cp->css.cgroup) &&
!cpumask_equal(cp->cpus_allowed, cp->effective_cpus));
update_tasks_cpumask(cp); update_tasks_cpumask(cp);
/*
* If the effective cpumask of any non-empty cpuset is changed,
* we need to rebuild sched domains.
*/
if (!cpumask_empty(cp->cpus_allowed) &&
is_sched_load_balance(cp))
need_rebuild_sched_domains = true;
rcu_read_lock(); rcu_read_lock();
css_put(&cp->css); css_put(&cp->css);
} }
rcu_read_unlock(); rcu_read_unlock();
if (need_rebuild_sched_domains)
rebuild_sched_domains_locked();
} }
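update_cpumasks_hier() recomputes every descendant's effective mask as the intersection of its configured mask with the parent's effective mask, inherits the parent's effective mask when that intersection is empty, and skips subtrees whose result does not change. A minimal userspace sketch of just that rule, with plain bitmasks in place of cpumask_var_t (the struct, field names and values below are invented for illustration):

#include <stdio.h>

struct node {
	unsigned long configured;	/* analogous to cs->cpus_allowed */
	unsigned long effective;	/* analogous to cs->effective_cpus */
	struct node *parent;
};

/* Recompute @n->effective the way update_cpumasks_hier() does for one cpuset. */
static void recompute(struct node *n)
{
	unsigned long eff = n->configured & n->parent->effective;

	if (!eff)			/* empty: fall back to the parent's mask */
		eff = n->parent->effective;
	n->effective = eff;
}

int main(void)
{
	struct node root  = { .configured = 0xffUL, .effective = 0xffUL };
	struct node child = { .configured = 0x0cUL, .parent = &root };
	struct node leaf  = { .configured = 0x30UL, .parent = &child };

	recompute(&child);	/* 0x0c & 0xff      -> 0x0c */
	recompute(&leaf);	/* 0x30 & 0x0c == 0 -> inherits 0x0c */
	printf("child=%#lx leaf=%#lx\n", child.effective, leaf.effective);
	return 0;
}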
/** /**
@ -889,7 +911,6 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
const char *buf) const char *buf)
{ {
int retval; int retval;
int is_load_balanced;
/* top_cpuset.cpus_allowed tracks cpu_online_mask; it's read-only */ /* top_cpuset.cpus_allowed tracks cpu_online_mask; it's read-only */
if (cs == &top_cpuset) if (cs == &top_cpuset)
@ -908,7 +929,8 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
if (retval < 0) if (retval < 0)
return retval; return retval;
if (!cpumask_subset(trialcs->cpus_allowed, cpu_active_mask)) if (!cpumask_subset(trialcs->cpus_allowed,
top_cpuset.cpus_allowed))
return -EINVAL; return -EINVAL;
} }
@ -920,16 +942,12 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
if (retval < 0) if (retval < 0)
return retval; return retval;
is_load_balanced = is_sched_load_balance(trialcs);
mutex_lock(&callback_mutex); mutex_lock(&callback_mutex);
cpumask_copy(cs->cpus_allowed, trialcs->cpus_allowed); cpumask_copy(cs->cpus_allowed, trialcs->cpus_allowed);
mutex_unlock(&callback_mutex); mutex_unlock(&callback_mutex);
update_tasks_cpumask_hier(cs, true); /* use trialcs->cpus_allowed as a temp variable */
update_cpumasks_hier(cs, trialcs->cpus_allowed);
if (is_load_balanced)
rebuild_sched_domains_locked();
return 0; return 0;
} }
@ -951,15 +969,13 @@ static void cpuset_migrate_mm(struct mm_struct *mm, const nodemask_t *from,
const nodemask_t *to) const nodemask_t *to)
{ {
struct task_struct *tsk = current; struct task_struct *tsk = current;
struct cpuset *mems_cs;
tsk->mems_allowed = *to; tsk->mems_allowed = *to;
do_migrate_pages(mm, from, to, MPOL_MF_MOVE_ALL); do_migrate_pages(mm, from, to, MPOL_MF_MOVE_ALL);
rcu_read_lock(); rcu_read_lock();
mems_cs = effective_nodemask_cpuset(task_cs(tsk)); guarantee_online_mems(task_cs(tsk), &tsk->mems_allowed);
guarantee_online_mems(mems_cs, &tsk->mems_allowed);
rcu_read_unlock(); rcu_read_unlock();
} }
@ -1028,13 +1044,12 @@ static void *cpuset_being_rebound;
static void update_tasks_nodemask(struct cpuset *cs) static void update_tasks_nodemask(struct cpuset *cs)
{ {
static nodemask_t newmems; /* protected by cpuset_mutex */ static nodemask_t newmems; /* protected by cpuset_mutex */
struct cpuset *mems_cs = effective_nodemask_cpuset(cs);
struct css_task_iter it; struct css_task_iter it;
struct task_struct *task; struct task_struct *task;
cpuset_being_rebound = cs; /* causes mpol_dup() rebind */ cpuset_being_rebound = cs; /* causes mpol_dup() rebind */
guarantee_online_mems(mems_cs, &newmems); guarantee_online_mems(cs, &newmems);
/* /*
* The mpol_rebind_mm() call takes mmap_sem, which we couldn't * The mpol_rebind_mm() call takes mmap_sem, which we couldn't
@ -1077,36 +1092,52 @@ static void update_tasks_nodemask(struct cpuset *cs)
} }
/* /*
* update_tasks_nodemask_hier - Update the nodemasks of tasks in the hierarchy. * update_nodemasks_hier - Update effective nodemasks and tasks in the subtree
* @cs: the root cpuset of the hierarchy * @cs: the cpuset to consider
* @update_root: update the root cpuset or not? * @new_mems: a temp variable for calculating new effective_mems
* *
* This will update nodemasks of tasks in @root_cs and all other empty cpusets * When configured nodemask is changed, the effective nodemasks of this cpuset
* which take on nodemask of @root_cs. * and all its descendants need to be updated.
*
* On legacy hierarchy, effective_mems will be the same as mems_allowed.
* *
* Called with cpuset_mutex held * Called with cpuset_mutex held
*/ */
static void update_tasks_nodemask_hier(struct cpuset *root_cs, bool update_root) static void update_nodemasks_hier(struct cpuset *cs, nodemask_t *new_mems)
{ {
struct cpuset *cp; struct cpuset *cp;
struct cgroup_subsys_state *pos_css; struct cgroup_subsys_state *pos_css;
rcu_read_lock(); rcu_read_lock();
cpuset_for_each_descendant_pre(cp, pos_css, root_cs) { cpuset_for_each_descendant_pre(cp, pos_css, cs) {
if (cp == root_cs) { struct cpuset *parent = parent_cs(cp);
if (!update_root)
continue; nodes_and(*new_mems, cp->mems_allowed, parent->effective_mems);
} else {
/* skip the whole subtree if @cp has some mems */ /*
if (!nodes_empty(cp->mems_allowed)) { * If it becomes empty, inherit the effective mask of the
pos_css = css_rightmost_descendant(pos_css); * parent, which is guaranteed to have some MEMs.
continue; */
} if (nodes_empty(*new_mems))
*new_mems = parent->effective_mems;
/* Skip the whole subtree if the nodemask remains the same. */
if (nodes_equal(*new_mems, cp->effective_mems)) {
pos_css = css_rightmost_descendant(pos_css);
continue;
} }
if (!css_tryget_online(&cp->css)) if (!css_tryget_online(&cp->css))
continue; continue;
rcu_read_unlock(); rcu_read_unlock();
mutex_lock(&callback_mutex);
cp->effective_mems = *new_mems;
mutex_unlock(&callback_mutex);
WARN_ON(!cgroup_on_dfl(cp->css.cgroup) &&
!nodes_equal(cp->mems_allowed, cp->effective_mems));
update_tasks_nodemask(cp); update_tasks_nodemask(cp);
rcu_read_lock(); rcu_read_lock();
@ -1156,8 +1187,8 @@ static int update_nodemask(struct cpuset *cs, struct cpuset *trialcs,
goto done; goto done;
if (!nodes_subset(trialcs->mems_allowed, if (!nodes_subset(trialcs->mems_allowed,
node_states[N_MEMORY])) { top_cpuset.mems_allowed)) {
retval = -EINVAL; retval = -EINVAL;
goto done; goto done;
} }
} }
@ -1174,7 +1205,8 @@ static int update_nodemask(struct cpuset *cs, struct cpuset *trialcs,
cs->mems_allowed = trialcs->mems_allowed; cs->mems_allowed = trialcs->mems_allowed;
mutex_unlock(&callback_mutex); mutex_unlock(&callback_mutex);
update_tasks_nodemask_hier(cs, true); /* use trialcs->mems_allowed as a temp variable */
update_nodemasks_hier(cs, &cs->mems_allowed);
done: done:
return retval; return retval;
} }
@ -1389,12 +1421,9 @@ static int cpuset_can_attach(struct cgroup_subsys_state *css,
mutex_lock(&cpuset_mutex); mutex_lock(&cpuset_mutex);
/* /* allow moving tasks into an empty cpuset if on default hierarchy */
* We allow to move tasks into an empty cpuset if sane_behavior
* flag is set.
*/
ret = -ENOSPC; ret = -ENOSPC;
if (!cgroup_sane_behavior(css->cgroup) && if (!cgroup_on_dfl(css->cgroup) &&
(cpumask_empty(cs->cpus_allowed) || nodes_empty(cs->mems_allowed))) (cpumask_empty(cs->cpus_allowed) || nodes_empty(cs->mems_allowed)))
goto out_unlock; goto out_unlock;
@ -1452,8 +1481,6 @@ static void cpuset_attach(struct cgroup_subsys_state *css,
struct task_struct *leader = cgroup_taskset_first(tset); struct task_struct *leader = cgroup_taskset_first(tset);
struct cpuset *cs = css_cs(css); struct cpuset *cs = css_cs(css);
struct cpuset *oldcs = cpuset_attach_old_cs; struct cpuset *oldcs = cpuset_attach_old_cs;
struct cpuset *cpus_cs = effective_cpumask_cpuset(cs);
struct cpuset *mems_cs = effective_nodemask_cpuset(cs);
mutex_lock(&cpuset_mutex); mutex_lock(&cpuset_mutex);
@ -1461,9 +1488,9 @@ static void cpuset_attach(struct cgroup_subsys_state *css,
if (cs == &top_cpuset) if (cs == &top_cpuset)
cpumask_copy(cpus_attach, cpu_possible_mask); cpumask_copy(cpus_attach, cpu_possible_mask);
else else
guarantee_online_cpus(cpus_cs, cpus_attach); guarantee_online_cpus(cs, cpus_attach);
guarantee_online_mems(mems_cs, &cpuset_attach_nodemask_to); guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
cgroup_taskset_for_each(task, tset) { cgroup_taskset_for_each(task, tset) {
/* /*
@ -1480,11 +1507,9 @@ static void cpuset_attach(struct cgroup_subsys_state *css,
* Change mm, possibly for multiple threads in a threadgroup. This is * Change mm, possibly for multiple threads in a threadgroup. This is
* expensive and may sleep. * expensive and may sleep.
*/ */
cpuset_attach_nodemask_to = cs->mems_allowed; cpuset_attach_nodemask_to = cs->effective_mems;
mm = get_task_mm(leader); mm = get_task_mm(leader);
if (mm) { if (mm) {
struct cpuset *mems_oldcs = effective_nodemask_cpuset(oldcs);
mpol_rebind_mm(mm, &cpuset_attach_nodemask_to); mpol_rebind_mm(mm, &cpuset_attach_nodemask_to);
/* /*
@ -1495,7 +1520,7 @@ static void cpuset_attach(struct cgroup_subsys_state *css,
* mm from. * mm from.
*/ */
if (is_memory_migrate(cs)) { if (is_memory_migrate(cs)) {
cpuset_migrate_mm(mm, &mems_oldcs->old_mems_allowed, cpuset_migrate_mm(mm, &oldcs->old_mems_allowed,
&cpuset_attach_nodemask_to); &cpuset_attach_nodemask_to);
} }
mmput(mm); mmput(mm);
@ -1516,6 +1541,8 @@ typedef enum {
FILE_MEMORY_MIGRATE, FILE_MEMORY_MIGRATE,
FILE_CPULIST, FILE_CPULIST,
FILE_MEMLIST, FILE_MEMLIST,
FILE_EFFECTIVE_CPULIST,
FILE_EFFECTIVE_MEMLIST,
FILE_CPU_EXCLUSIVE, FILE_CPU_EXCLUSIVE,
FILE_MEM_EXCLUSIVE, FILE_MEM_EXCLUSIVE,
FILE_MEM_HARDWALL, FILE_MEM_HARDWALL,
@ -1694,6 +1721,12 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
case FILE_MEMLIST: case FILE_MEMLIST:
s += nodelist_scnprintf(s, count, cs->mems_allowed); s += nodelist_scnprintf(s, count, cs->mems_allowed);
break; break;
case FILE_EFFECTIVE_CPULIST:
s += cpulist_scnprintf(s, count, cs->effective_cpus);
break;
case FILE_EFFECTIVE_MEMLIST:
s += nodelist_scnprintf(s, count, cs->effective_mems);
break;
default: default:
ret = -EINVAL; ret = -EINVAL;
goto out_unlock; goto out_unlock;
@ -1778,6 +1811,18 @@ static struct cftype files[] = {
.private = FILE_MEMLIST, .private = FILE_MEMLIST,
}, },
{
.name = "effective_cpus",
.seq_show = cpuset_common_seq_show,
.private = FILE_EFFECTIVE_CPULIST,
},
{
.name = "effective_mems",
.seq_show = cpuset_common_seq_show,
.private = FILE_EFFECTIVE_MEMLIST,
},
{ {
.name = "cpu_exclusive", .name = "cpu_exclusive",
.read_u64 = cpuset_read_u64, .read_u64 = cpuset_read_u64,
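The effective_cpus and effective_mems entries added above are read-only and are rendered through cpuset_common_seq_show(). A small userspace reader is sketched below; the mount point, controller prefix and group name are assumptions for illustration and depend on how the hierarchy is mounted on a given system:

#include <stdio.h>

static void dump(const char *path)
{
	char buf[256];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("%s: %s", path, buf);
	fclose(f);
}

int main(void)
{
	/* Assumed legacy-hierarchy layout; adjust for your setup. */
	dump("/sys/fs/cgroup/cpuset/mygroup/cpuset.effective_cpus");
	dump("/sys/fs/cgroup/cpuset/mygroup/cpuset.effective_mems");
	return 0;
}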
@ -1869,18 +1914,26 @@ cpuset_css_alloc(struct cgroup_subsys_state *parent_css)
cs = kzalloc(sizeof(*cs), GFP_KERNEL); cs = kzalloc(sizeof(*cs), GFP_KERNEL);
if (!cs) if (!cs)
return ERR_PTR(-ENOMEM); return ERR_PTR(-ENOMEM);
if (!alloc_cpumask_var(&cs->cpus_allowed, GFP_KERNEL)) { if (!alloc_cpumask_var(&cs->cpus_allowed, GFP_KERNEL))
kfree(cs); goto free_cs;
return ERR_PTR(-ENOMEM); if (!alloc_cpumask_var(&cs->effective_cpus, GFP_KERNEL))
} goto free_cpus;
set_bit(CS_SCHED_LOAD_BALANCE, &cs->flags); set_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
cpumask_clear(cs->cpus_allowed); cpumask_clear(cs->cpus_allowed);
nodes_clear(cs->mems_allowed); nodes_clear(cs->mems_allowed);
cpumask_clear(cs->effective_cpus);
nodes_clear(cs->effective_mems);
fmeter_init(&cs->fmeter); fmeter_init(&cs->fmeter);
cs->relax_domain_level = -1; cs->relax_domain_level = -1;
return &cs->css; return &cs->css;
free_cpus:
free_cpumask_var(cs->cpus_allowed);
free_cs:
kfree(cs);
return ERR_PTR(-ENOMEM);
} }
static int cpuset_css_online(struct cgroup_subsys_state *css) static int cpuset_css_online(struct cgroup_subsys_state *css)
@ -1903,6 +1956,13 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
cpuset_inc(); cpuset_inc();
mutex_lock(&callback_mutex);
if (cgroup_on_dfl(cs->css.cgroup)) {
cpumask_copy(cs->effective_cpus, parent->effective_cpus);
cs->effective_mems = parent->effective_mems;
}
mutex_unlock(&callback_mutex);
if (!test_bit(CGRP_CPUSET_CLONE_CHILDREN, &css->cgroup->flags)) if (!test_bit(CGRP_CPUSET_CLONE_CHILDREN, &css->cgroup->flags))
goto out_unlock; goto out_unlock;
@ -1962,20 +2022,40 @@ static void cpuset_css_free(struct cgroup_subsys_state *css)
{ {
struct cpuset *cs = css_cs(css); struct cpuset *cs = css_cs(css);
free_cpumask_var(cs->effective_cpus);
free_cpumask_var(cs->cpus_allowed); free_cpumask_var(cs->cpus_allowed);
kfree(cs); kfree(cs);
} }
static void cpuset_bind(struct cgroup_subsys_state *root_css)
{
mutex_lock(&cpuset_mutex);
mutex_lock(&callback_mutex);
if (cgroup_on_dfl(root_css->cgroup)) {
cpumask_copy(top_cpuset.cpus_allowed, cpu_possible_mask);
top_cpuset.mems_allowed = node_possible_map;
} else {
cpumask_copy(top_cpuset.cpus_allowed,
top_cpuset.effective_cpus);
top_cpuset.mems_allowed = top_cpuset.effective_mems;
}
mutex_unlock(&callback_mutex);
mutex_unlock(&cpuset_mutex);
}
struct cgroup_subsys cpuset_cgrp_subsys = { struct cgroup_subsys cpuset_cgrp_subsys = {
.css_alloc = cpuset_css_alloc, .css_alloc = cpuset_css_alloc,
.css_online = cpuset_css_online, .css_online = cpuset_css_online,
.css_offline = cpuset_css_offline, .css_offline = cpuset_css_offline,
.css_free = cpuset_css_free, .css_free = cpuset_css_free,
.can_attach = cpuset_can_attach, .can_attach = cpuset_can_attach,
.cancel_attach = cpuset_cancel_attach, .cancel_attach = cpuset_cancel_attach,
.attach = cpuset_attach, .attach = cpuset_attach,
.base_cftypes = files, .bind = cpuset_bind,
.early_init = 1, .legacy_cftypes = files,
.early_init = 1,
}; };
/** /**
@ -1990,9 +2070,13 @@ int __init cpuset_init(void)
if (!alloc_cpumask_var(&top_cpuset.cpus_allowed, GFP_KERNEL)) if (!alloc_cpumask_var(&top_cpuset.cpus_allowed, GFP_KERNEL))
BUG(); BUG();
if (!alloc_cpumask_var(&top_cpuset.effective_cpus, GFP_KERNEL))
BUG();
cpumask_setall(top_cpuset.cpus_allowed); cpumask_setall(top_cpuset.cpus_allowed);
nodes_setall(top_cpuset.mems_allowed); nodes_setall(top_cpuset.mems_allowed);
cpumask_setall(top_cpuset.effective_cpus);
nodes_setall(top_cpuset.effective_mems);
fmeter_init(&top_cpuset.fmeter); fmeter_init(&top_cpuset.fmeter);
set_bit(CS_SCHED_LOAD_BALANCE, &top_cpuset.flags); set_bit(CS_SCHED_LOAD_BALANCE, &top_cpuset.flags);
@ -2035,6 +2119,66 @@ static void remove_tasks_in_empty_cpuset(struct cpuset *cs)
} }
} }
static void
hotplug_update_tasks_legacy(struct cpuset *cs,
struct cpumask *new_cpus, nodemask_t *new_mems,
bool cpus_updated, bool mems_updated)
{
bool is_empty;
mutex_lock(&callback_mutex);
cpumask_copy(cs->cpus_allowed, new_cpus);
cpumask_copy(cs->effective_cpus, new_cpus);
cs->mems_allowed = *new_mems;
cs->effective_mems = *new_mems;
mutex_unlock(&callback_mutex);
/*
* Don't call update_tasks_cpumask() if the cpuset becomes empty,
* as the tasks will be migrated to an ancestor.
*/
if (cpus_updated && !cpumask_empty(cs->cpus_allowed))
update_tasks_cpumask(cs);
if (mems_updated && !nodes_empty(cs->mems_allowed))
update_tasks_nodemask(cs);
is_empty = cpumask_empty(cs->cpus_allowed) ||
nodes_empty(cs->mems_allowed);
mutex_unlock(&cpuset_mutex);
/*
* Move tasks to the nearest ancestor with execution resources.
* This is a full cgroup operation which will also call back into
* cpuset. Should be done outside any lock.
*/
if (is_empty)
remove_tasks_in_empty_cpuset(cs);
mutex_lock(&cpuset_mutex);
}
static void
hotplug_update_tasks(struct cpuset *cs,
struct cpumask *new_cpus, nodemask_t *new_mems,
bool cpus_updated, bool mems_updated)
{
if (cpumask_empty(new_cpus))
cpumask_copy(new_cpus, parent_cs(cs)->effective_cpus);
if (nodes_empty(*new_mems))
*new_mems = parent_cs(cs)->effective_mems;
mutex_lock(&callback_mutex);
cpumask_copy(cs->effective_cpus, new_cpus);
cs->effective_mems = *new_mems;
mutex_unlock(&callback_mutex);
if (cpus_updated)
update_tasks_cpumask(cs);
if (mems_updated)
update_tasks_nodemask(cs);
}
/** /**
* cpuset_hotplug_update_tasks - update tasks in a cpuset for hotunplug * cpuset_hotplug_update_tasks - update tasks in a cpuset for hotunplug
* @cs: cpuset in interest * @cs: cpuset in interest
@ -2045,11 +2189,10 @@ static void remove_tasks_in_empty_cpuset(struct cpuset *cs)
*/ */
static void cpuset_hotplug_update_tasks(struct cpuset *cs) static void cpuset_hotplug_update_tasks(struct cpuset *cs)
{ {
static cpumask_t off_cpus; static cpumask_t new_cpus;
static nodemask_t off_mems; static nodemask_t new_mems;
bool is_empty; bool cpus_updated;
bool sane = cgroup_sane_behavior(cs->css.cgroup); bool mems_updated;
retry: retry:
wait_event(cpuset_attach_wq, cs->attach_in_progress == 0); wait_event(cpuset_attach_wq, cs->attach_in_progress == 0);
@ -2064,51 +2207,20 @@ retry:
goto retry; goto retry;
} }
cpumask_andnot(&off_cpus, cs->cpus_allowed, top_cpuset.cpus_allowed); cpumask_and(&new_cpus, cs->cpus_allowed, parent_cs(cs)->effective_cpus);
nodes_andnot(off_mems, cs->mems_allowed, top_cpuset.mems_allowed); nodes_and(new_mems, cs->mems_allowed, parent_cs(cs)->effective_mems);
mutex_lock(&callback_mutex); cpus_updated = !cpumask_equal(&new_cpus, cs->effective_cpus);
cpumask_andnot(cs->cpus_allowed, cs->cpus_allowed, &off_cpus); mems_updated = !nodes_equal(new_mems, cs->effective_mems);
mutex_unlock(&callback_mutex);
/* if (cgroup_on_dfl(cs->css.cgroup))
* If sane_behavior flag is set, we need to update tasks' cpumask hotplug_update_tasks(cs, &new_cpus, &new_mems,
* for empty cpuset to take on ancestor's cpumask. Otherwise, don't cpus_updated, mems_updated);
* call update_tasks_cpumask() if the cpuset becomes empty, as else
* the tasks in it will be migrated to an ancestor. hotplug_update_tasks_legacy(cs, &new_cpus, &new_mems,
*/ cpus_updated, mems_updated);
if ((sane && cpumask_empty(cs->cpus_allowed)) ||
(!cpumask_empty(&off_cpus) && !cpumask_empty(cs->cpus_allowed)))
update_tasks_cpumask(cs);
mutex_lock(&callback_mutex);
nodes_andnot(cs->mems_allowed, cs->mems_allowed, off_mems);
mutex_unlock(&callback_mutex);
/*
* If sane_behavior flag is set, we need to update tasks' nodemask
* for empty cpuset to take on ancestor's nodemask. Otherwise, don't
* call update_tasks_nodemask() if the cpuset becomes empty, as
* the tasks in it will be migrated to an ancestor.
*/
if ((sane && nodes_empty(cs->mems_allowed)) ||
(!nodes_empty(off_mems) && !nodes_empty(cs->mems_allowed)))
update_tasks_nodemask(cs);
is_empty = cpumask_empty(cs->cpus_allowed) ||
nodes_empty(cs->mems_allowed);
mutex_unlock(&cpuset_mutex); mutex_unlock(&cpuset_mutex);
/*
* If sane_behavior flag is set, we'll keep tasks in empty cpusets.
*
* Otherwise move tasks to the nearest ancestor with execution
* resources. This is full cgroup operation which will
* also call back into cpuset. Should be done outside any lock.
*/
if (!sane && is_empty)
remove_tasks_in_empty_cpuset(cs);
} }
/** /**
@ -2132,6 +2244,7 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
static cpumask_t new_cpus; static cpumask_t new_cpus;
static nodemask_t new_mems; static nodemask_t new_mems;
bool cpus_updated, mems_updated; bool cpus_updated, mems_updated;
bool on_dfl = cgroup_on_dfl(top_cpuset.css.cgroup);
mutex_lock(&cpuset_mutex); mutex_lock(&cpuset_mutex);
@ -2139,13 +2252,15 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
cpumask_copy(&new_cpus, cpu_active_mask); cpumask_copy(&new_cpus, cpu_active_mask);
new_mems = node_states[N_MEMORY]; new_mems = node_states[N_MEMORY];
cpus_updated = !cpumask_equal(top_cpuset.cpus_allowed, &new_cpus); cpus_updated = !cpumask_equal(top_cpuset.effective_cpus, &new_cpus);
mems_updated = !nodes_equal(top_cpuset.mems_allowed, new_mems); mems_updated = !nodes_equal(top_cpuset.effective_mems, new_mems);
/* synchronize cpus_allowed to cpu_active_mask */ /* synchronize cpus_allowed to cpu_active_mask */
if (cpus_updated) { if (cpus_updated) {
mutex_lock(&callback_mutex); mutex_lock(&callback_mutex);
cpumask_copy(top_cpuset.cpus_allowed, &new_cpus); if (!on_dfl)
cpumask_copy(top_cpuset.cpus_allowed, &new_cpus);
cpumask_copy(top_cpuset.effective_cpus, &new_cpus);
mutex_unlock(&callback_mutex); mutex_unlock(&callback_mutex);
/* we don't mess with cpumasks of tasks in top_cpuset */ /* we don't mess with cpumasks of tasks in top_cpuset */
} }
@ -2153,7 +2268,9 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
/* synchronize mems_allowed to N_MEMORY */ /* synchronize mems_allowed to N_MEMORY */
if (mems_updated) { if (mems_updated) {
mutex_lock(&callback_mutex); mutex_lock(&callback_mutex);
top_cpuset.mems_allowed = new_mems; if (!on_dfl)
top_cpuset.mems_allowed = new_mems;
top_cpuset.effective_mems = new_mems;
mutex_unlock(&callback_mutex); mutex_unlock(&callback_mutex);
update_tasks_nodemask(&top_cpuset); update_tasks_nodemask(&top_cpuset);
} }
@ -2228,6 +2345,9 @@ void __init cpuset_init_smp(void)
top_cpuset.mems_allowed = node_states[N_MEMORY]; top_cpuset.mems_allowed = node_states[N_MEMORY];
top_cpuset.old_mems_allowed = top_cpuset.mems_allowed; top_cpuset.old_mems_allowed = top_cpuset.mems_allowed;
cpumask_copy(top_cpuset.effective_cpus, cpu_active_mask);
top_cpuset.effective_mems = node_states[N_MEMORY];
register_hotmemory_notifier(&cpuset_track_online_nodes_nb); register_hotmemory_notifier(&cpuset_track_online_nodes_nb);
} }
@ -2244,23 +2364,17 @@ void __init cpuset_init_smp(void)
void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask) void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)
{ {
struct cpuset *cpus_cs;
mutex_lock(&callback_mutex); mutex_lock(&callback_mutex);
rcu_read_lock(); rcu_read_lock();
cpus_cs = effective_cpumask_cpuset(task_cs(tsk)); guarantee_online_cpus(task_cs(tsk), pmask);
guarantee_online_cpus(cpus_cs, pmask);
rcu_read_unlock(); rcu_read_unlock();
mutex_unlock(&callback_mutex); mutex_unlock(&callback_mutex);
} }
void cpuset_cpus_allowed_fallback(struct task_struct *tsk) void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
{ {
struct cpuset *cpus_cs;
rcu_read_lock(); rcu_read_lock();
cpus_cs = effective_cpumask_cpuset(task_cs(tsk)); do_set_cpus_allowed(tsk, task_cs(tsk)->effective_cpus);
do_set_cpus_allowed(tsk, cpus_cs->cpus_allowed);
rcu_read_unlock(); rcu_read_unlock();
/* /*
@ -2299,13 +2413,11 @@ void cpuset_init_current_mems_allowed(void)
nodemask_t cpuset_mems_allowed(struct task_struct *tsk) nodemask_t cpuset_mems_allowed(struct task_struct *tsk)
{ {
struct cpuset *mems_cs;
nodemask_t mask; nodemask_t mask;
mutex_lock(&callback_mutex); mutex_lock(&callback_mutex);
rcu_read_lock(); rcu_read_lock();
mems_cs = effective_nodemask_cpuset(task_cs(tsk)); guarantee_online_mems(task_cs(tsk), &mask);
guarantee_online_mems(mems_cs, &mask);
rcu_read_unlock(); rcu_read_unlock();
mutex_unlock(&callback_mutex); mutex_unlock(&callback_mutex);


@ -8083,7 +8083,7 @@ struct cgroup_subsys cpu_cgrp_subsys = {
.can_attach = cpu_cgroup_can_attach, .can_attach = cpu_cgroup_can_attach,
.attach = cpu_cgroup_attach, .attach = cpu_cgroup_attach,
.exit = cpu_cgroup_exit, .exit = cpu_cgroup_exit,
.base_cftypes = cpu_files, .legacy_cftypes = cpu_files,
.early_init = 1, .early_init = 1,
}; };


@ -278,6 +278,6 @@ void cpuacct_account_field(struct task_struct *p, int index, u64 val)
struct cgroup_subsys cpuacct_cgrp_subsys = { struct cgroup_subsys cpuacct_cgrp_subsys = {
.css_alloc = cpuacct_css_alloc, .css_alloc = cpuacct_css_alloc,
.css_free = cpuacct_css_free, .css_free = cpuacct_css_free,
.base_cftypes = files, .legacy_cftypes = files,
.early_init = 1, .early_init = 1,
}; };


@ -358,9 +358,8 @@ static void __init __hugetlb_cgroup_file_init(int idx)
cft = &h->cgroup_files[4]; cft = &h->cgroup_files[4];
memset(cft, 0, sizeof(*cft)); memset(cft, 0, sizeof(*cft));
WARN_ON(cgroup_add_cftypes(&hugetlb_cgrp_subsys, h->cgroup_files)); WARN_ON(cgroup_add_legacy_cftypes(&hugetlb_cgrp_subsys,
h->cgroup_files));
return;
} }
void __init hugetlb_cgroup_file_init(void) void __init hugetlb_cgroup_file_init(void)


@ -6007,7 +6007,6 @@ static struct cftype mem_cgroup_files[] = {
}, },
{ {
.name = "use_hierarchy", .name = "use_hierarchy",
.flags = CFTYPE_INSANE,
.write_u64 = mem_cgroup_hierarchy_write, .write_u64 = mem_cgroup_hierarchy_write,
.read_u64 = mem_cgroup_hierarchy_read, .read_u64 = mem_cgroup_hierarchy_read,
}, },
@ -6411,6 +6410,29 @@ static void mem_cgroup_css_free(struct cgroup_subsys_state *css)
__mem_cgroup_free(memcg); __mem_cgroup_free(memcg);
} }
/**
* mem_cgroup_css_reset - reset the states of a mem_cgroup
* @css: the target css
*
* Reset the states of the mem_cgroup associated with @css. This is
* invoked when the userland requests disabling on the default hierarchy
* but the memcg is pinned through dependency. The memcg should stop
* applying policies and should revert to the vanilla state as it may be
* made visible again.
*
* The current implementation only resets the essential configurations.
* This needs to be expanded to cover all the visible parts.
*/
static void mem_cgroup_css_reset(struct cgroup_subsys_state *css)
{
struct mem_cgroup *memcg = mem_cgroup_from_css(css);
mem_cgroup_resize_limit(memcg, ULLONG_MAX);
mem_cgroup_resize_memsw_limit(memcg, ULLONG_MAX);
memcg_update_kmem_limit(memcg, ULLONG_MAX);
res_counter_set_soft_limit(&memcg->res, ULLONG_MAX);
}
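mem_cgroup_css_reset() is invoked when userspace disables memory through "cgroup.subtree_control" on the default hierarchy while a dependent controller keeps the css pinned. A minimal userspace sketch of the triggering write; the mount point and cgroup name are assumptions, and whether the css is actually pinned rather than destroyed depends on which other controllers remain enabled:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	/* Assumed location of a parent cgroup on the unified hierarchy. */
	const char *ctl = "/sys/fs/cgroup/parent/cgroup.subtree_control";
	int fd = open(ctl, O_WRONLY);

	if (fd < 0) {
		perror(ctl);
		return 1;
	}
	/* Disable memory for the children; if another enabled controller
	 * depends on it, the memcg css is hidden and reset, not destroyed. */
	if (write(fd, "-memory", strlen("-memory")) < 0)
		perror("write");
	close(fd);
	return 0;
}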
#ifdef CONFIG_MMU #ifdef CONFIG_MMU
/* Handlers for move charge at task migration. */ /* Handlers for move charge at task migration. */
#define PRECHARGE_COUNT_AT_ONCE 256 #define PRECHARGE_COUNT_AT_ONCE 256
@ -7005,16 +7027,17 @@ static void mem_cgroup_move_task(struct cgroup_subsys_state *css,
/* /*
* Cgroup retains root cgroups across [un]mount cycles making it necessary * Cgroup retains root cgroups across [un]mount cycles making it necessary
* to verify sane_behavior flag on each mount attempt. * to verify whether we're attached to the default hierarchy on each mount
* attempt.
*/ */
static void mem_cgroup_bind(struct cgroup_subsys_state *root_css) static void mem_cgroup_bind(struct cgroup_subsys_state *root_css)
{ {
/* /*
* use_hierarchy is forced with sane_behavior. cgroup core * use_hierarchy is forced on the default hierarchy. cgroup core
* guarantees that @root doesn't have any children, so turning it * guarantees that @root doesn't have any children, so turning it
* on for the root memcg is enough. * on for the root memcg is enough.
*/ */
if (cgroup_sane_behavior(root_css->cgroup)) if (cgroup_on_dfl(root_css->cgroup))
mem_cgroup_from_css(root_css)->use_hierarchy = true; mem_cgroup_from_css(root_css)->use_hierarchy = true;
} }
@ -7023,11 +7046,12 @@ struct cgroup_subsys memory_cgrp_subsys = {
.css_online = mem_cgroup_css_online, .css_online = mem_cgroup_css_online,
.css_offline = mem_cgroup_css_offline, .css_offline = mem_cgroup_css_offline,
.css_free = mem_cgroup_css_free, .css_free = mem_cgroup_css_free,
.css_reset = mem_cgroup_css_reset,
.can_attach = mem_cgroup_can_attach, .can_attach = mem_cgroup_can_attach,
.cancel_attach = mem_cgroup_cancel_attach, .cancel_attach = mem_cgroup_cancel_attach,
.attach = mem_cgroup_move_task, .attach = mem_cgroup_move_task,
.bind = mem_cgroup_bind, .bind = mem_cgroup_bind,
.base_cftypes = mem_cgroup_files, .legacy_cftypes = mem_cgroup_files,
.early_init = 0, .early_init = 0,
}; };
@ -7044,7 +7068,8 @@ __setup("swapaccount=", enable_swap_account);
static void __init memsw_file_init(void) static void __init memsw_file_init(void)
{ {
WARN_ON(cgroup_add_cftypes(&memory_cgrp_subsys, memsw_cgroup_files)); WARN_ON(cgroup_add_legacy_cftypes(&memory_cgrp_subsys,
memsw_cgroup_files));
} }
static void __init enable_swap_cgroup(void) static void __init enable_swap_cgroup(void)


@ -107,5 +107,5 @@ struct cgroup_subsys net_cls_cgrp_subsys = {
.css_online = cgrp_css_online, .css_online = cgrp_css_online,
.css_free = cgrp_css_free, .css_free = cgrp_css_free,
.attach = cgrp_attach, .attach = cgrp_attach,
.base_cftypes = ss_files, .legacy_cftypes = ss_files,
}; };


@ -249,7 +249,7 @@ struct cgroup_subsys net_prio_cgrp_subsys = {
.css_online = cgrp_css_online, .css_online = cgrp_css_online,
.css_free = cgrp_css_free, .css_free = cgrp_css_free,
.attach = net_prio_attach, .attach = net_prio_attach,
.base_cftypes = ss_files, .legacy_cftypes = ss_files,
}; };
static int netprio_device_event(struct notifier_block *unused, static int netprio_device_event(struct notifier_block *unused,


@ -222,7 +222,7 @@ static struct cftype tcp_files[] = {
static int __init tcp_memcontrol_init(void) static int __init tcp_memcontrol_init(void)
{ {
WARN_ON(cgroup_add_cftypes(&memory_cgrp_subsys, tcp_files)); WARN_ON(cgroup_add_legacy_cftypes(&memory_cgrp_subsys, tcp_files));
return 0; return 0;
} }
__initcall(tcp_memcontrol_init); __initcall(tcp_memcontrol_init);


@ -796,7 +796,7 @@ struct cgroup_subsys devices_cgrp_subsys = {
.css_free = devcgroup_css_free, .css_free = devcgroup_css_free,
.css_online = devcgroup_online, .css_online = devcgroup_online,
.css_offline = devcgroup_offline, .css_offline = devcgroup_offline,
.base_cftypes = dev_cgroup_files, .legacy_cftypes = dev_cgroup_files,
}; };
/** /**