Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

net/sched/cls_api.c has overlapping changes to a call to
nlmsg_parse(): one change (from 'net') passed rtm_tca_policy instead
of NULL as the 5th argument, and the other (from 'net-next') passed
cb->extack instead of NULL as the 6th argument.
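
For reference, a sketch of the resolved call after taking both changes
(the surrounding variable names in net/sched/cls_api.c are assumed
here, so treat this as an illustration of the resolution rather than
the verbatim resulting line):

	/* Keep both new arguments: the rtm_tca_policy netlink policy
	 * from 'net' and the cb->extack extended-ack pointer from
	 * 'net-next'.
	 */
	err = nlmsg_parse(cb->nlh, sizeof(*tcm), tca, TCA_MAX,
			  rtm_tca_policy, cb->extack);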

net/ipv4/ipmr_base.c is a case of a bug fix in 'net' being applied to
code which moved (to mr_table_dump()) in 'net-next'.  Thanks to David
Ahern for the heads up.

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller 2018-10-19 11:03:06 -07:00
commit 2e2d6f0342
114 changed files with 569 additions and 876 deletions


@ -1,4 +1,4 @@
.. SPDX-License-Identifier: CC-BY-SA-4.0
.. SPDX-License-Identifier: GPL-2.0+
=============
ID Allocation


@ -1,397 +0,0 @@
Valid-License-Identifier: CC-BY-SA-4.0
SPDX-URL: https://spdx.org/licenses/CC-BY-SA-4.0
Usage-Guide:
To use the Creative Commons Attribution Share Alike 4.0 International
license put the following SPDX tag/value pair into a comment according to
the placement guidelines in the licensing rules documentation:
SPDX-License-Identifier: CC-BY-SA-4.0
License-Text:
Creative Commons Attribution-ShareAlike 4.0 International
Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of Creative
Commons public licenses does not create a lawyer-client or other
relationship. Creative Commons makes its licenses and related information
available on an "as-is" basis. Creative Commons gives no warranties
regarding its licenses, any material licensed under their terms and
conditions, or any related information. Creative Commons disclaims all
liability for damages resulting from their use to the fullest extent
possible.
Using Creative Commons Public Licenses
Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share original
works of authorship and other material subject to copyright and certain
other rights specified in the public license below. The following
considerations are for informational purposes only, are not exhaustive, and
do not form part of our licenses.
Considerations for licensors: Our public licenses are intended for use by
those authorized to give the public permission to use material in ways
otherwise restricted by copyright and certain other rights. Our licenses
are irrevocable. Licensors should read and understand the terms and
conditions of the license they choose before applying it. Licensors should
also secure all rights necessary before applying our licenses so that the
public can reuse the material as expected. Licensors should clearly mark
any material not subject to the license. This includes other CC-licensed
material, or material used under an exception or limitation to
copyright. More considerations for licensors:
wiki.creativecommons.org/Considerations_for_licensors
Considerations for the public: By using one of our public licenses, a
licensor grants the public permission to use the licensed material under
specified terms and conditions. If the licensor's permission is not
necessary for any reason - for example, because of any applicable exception
or limitation to copyright - then that use is not regulated by the
license. Our licenses grant only permissions under copyright and certain
other rights that a licensor has authority to grant. Use of the licensed
material may still be restricted for other reasons, including because
others have copyright or other rights in the material. A licensor may make
special requests, such as asking that all changes be marked or described.
Although not required by our licenses, you are encouraged to respect those
requests where reasonable. More considerations for the public:
wiki.creativecommons.org/Considerations_for_licensees
Creative Commons Attribution-ShareAlike 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree to
be bound by the terms and conditions of this Creative Commons
Attribution-ShareAlike 4.0 International Public License ("Public
License"). To the extent this Public License may be interpreted as a
contract, You are granted the Licensed Rights in consideration of Your
acceptance of these terms and conditions, and the Licensor grants You such
rights in consideration of benefits the Licensor receives from making the
Licensed Material available under these terms and conditions.
Section 1 - Definitions.
a. Adapted Material means material subject to Copyright and Similar
Rights that is derived from or based upon the Licensed Material and
in which the Licensed Material is translated, altered, arranged,
transformed, or otherwise modified in a manner requiring permission
under the Copyright and Similar Rights held by the Licensor. For
purposes of this Public License, where the Licensed Material is a
musical work, performance, or sound recording, Adapted Material is
always produced where the Licensed Material is synched in timed
relation with a moving image.
b. Adapter's License means the license You apply to Your Copyright and
Similar Rights in Your contributions to Adapted Material in
accordance with the terms and conditions of this Public License.
c. BY-SA Compatible License means a license listed at
creativecommons.org/compatiblelicenses, approved by Creative Commons
as essentially the equivalent of this Public License.
d. Copyright and Similar Rights means copyright and/or similar rights
closely related to copyright including, without limitation,
performance, broadcast, sound recording, and Sui Generis Database
Rights, without regard to how the rights are labeled or
categorized. For purposes of this Public License, the rights
specified in Section 2(b)(1)-(2) are not Copyright and Similar
Rights.
e. Effective Technological Measures means those measures that, in the
absence of proper authority, may not be circumvented under laws
fulfilling obligations under Article 11 of the WIPO Copyright Treaty
adopted on December 20, 1996, and/or similar international
agreements.
f. Exceptions and Limitations means fair use, fair dealing, and/or any
other exception or limitation to Copyright and Similar Rights that
applies to Your use of the Licensed Material.
g. License Elements means the license attributes listed in the name of
a Creative Commons Public License. The License Elements of this
Public License are Attribution and ShareAlike.
h. Licensed Material means the artistic or literary work, database, or
other material to which the Licensor applied this Public License.
i. Licensed Rights means the rights granted to You subject to the terms
and conditions of this Public License, which are limited to all
Copyright and Similar Rights that apply to Your use of the Licensed
Material and that the Licensor has authority to license.
j. Licensor means the individual(s) or entity(ies) granting rights
under this Public License.
k. Share means to provide material to the public by any means or
process that requires permission under the Licensed Rights, such as
reproduction, public display, public performance, distribution,
dissemination, communication, or importation, and to make material
available to the public including in ways that members of the public
may access the material from a place and at a time individually
chosen by them.
l. Sui Generis Database Rights means rights other than copyright
resulting from Directive 96/9/EC of the European Parliament and of
the Council of 11 March 1996 on the legal protection of databases,
as amended and/or succeeded, as well as other essentially equivalent
rights anywhere in the world.
m. You means the individual or entity exercising the Licensed Rights
under this Public License. Your has a corresponding meaning.
Section 2 - Scope.
a. License grant.
1. Subject to the terms and conditions of this Public License, the
Licensor hereby grants You a worldwide, royalty-free,
non-sublicensable, non-exclusive, irrevocable license to
exercise the Licensed Rights in the Licensed Material to:
A. reproduce and Share the Licensed Material, in whole or in part; and
B. produce, reproduce, and Share Adapted Material.
2. Exceptions and Limitations. For the avoidance of doubt, where
Exceptions and Limitations apply to Your use, this Public
License does not apply, and You do not need to comply with its
terms and conditions.
3. Term. The term of this Public License is specified in Section 6(a).
4. Media and formats; technical modifications allowed. The Licensor
authorizes You to exercise the Licensed Rights in all media and
formats whether now known or hereafter created, and to make
technical modifications necessary to do so. The Licensor waives
and/or agrees not to assert any right or authority to forbid You
from making technical modifications necessary to exercise the
Licensed Rights, including technical modifications necessary to
circumvent Effective Technological Measures. For purposes of
this Public License, simply making modifications authorized by
this Section 2(a)(4) never produces Adapted Material.
5. Downstream recipients.
A. Offer from the Licensor - Licensed Material. Every recipient
of the Licensed Material automatically receives an offer
from the Licensor to exercise the Licensed Rights under the
terms and conditions of this Public License.
B. Additional offer from the Licensor - Adapted Material. Every
recipient of Adapted Material from You automatically
receives an offer from the Licensor to exercise the Licensed
Rights in the Adapted Material under the conditions of the
Adapter's License You apply.
C. No downstream restrictions. You may not offer or impose any
additional or different terms or conditions on, or apply any
Effective Technological Measures to, the Licensed Material
if doing so restricts exercise of the Licensed Rights by any
recipient of the Licensed Material.
6. No endorsement. Nothing in this Public License constitutes or
may be construed as permission to assert or imply that You are,
or that Your use of the Licensed Material is, connected with, or
sponsored, endorsed, or granted official status by, the Licensor
or others designated to receive attribution as provided in
Section 3(a)(1)(A)(i).
b. Other rights.
1. Moral rights, such as the right of integrity, are not licensed
under this Public License, nor are publicity, privacy, and/or
other similar personality rights; however, to the extent
possible, the Licensor waives and/or agrees not to assert any
such rights held by the Licensor to the limited extent necessary
to allow You to exercise the Licensed Rights, but not otherwise.
2. Patent and trademark rights are not licensed under this Public
License.
3. To the extent possible, the Licensor waives any right to collect
royalties from You for the exercise of the Licensed Rights,
whether directly or through a collecting society under any
voluntary or waivable statutory or compulsory licensing
scheme. In all other cases the Licensor expressly reserves any
right to collect such royalties.
Section 3 - License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the
following conditions.
a. Attribution.
1. If You Share the Licensed Material (including in modified form),
You must:
A. retain the following if it is supplied by the Licensor with
the Licensed Material:
i. identification of the creator(s) of the Licensed
Material and any others designated to receive
attribution, in any reasonable manner requested by the
Licensor (including by pseudonym if designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of warranties;
v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable;
B. indicate if You modified the Licensed Material and retain an
indication of any previous modifications; and
C. indicate the Licensed Material is licensed under this Public
License, and include the text of, or the URI or hyperlink to,
this Public License.
2. You may satisfy the conditions in Section 3(a)(1) in any
reasonable manner based on the medium, means, and context in
which You Share the Licensed Material. For example, it may be
reasonable to satisfy the conditions by providing a URI or
hyperlink to a resource that includes the required information.
3. If requested by the Licensor, You must remove any of the
information required by Section 3(a)(1)(A) to the extent
reasonably practicable.
b. ShareAlike. In addition to the conditions in Section 3(a), if You
Share Adapted Material You produce, the following conditions also
apply.
1. The Adapter's License You apply must be a Creative Commons
license with the same License Elements, this version or
later, or a BY-SA Compatible License.
2. You must include the text of, or the URI or hyperlink to, the
Adapter's License You apply. You may satisfy this condition
in any reasonable manner based on the medium, means, and
context in which You Share Adapted Material.
3. You may not offer or impose any additional or different terms
or conditions on, or apply any Effective Technological
Measures to, Adapted Material that restrict exercise of the
rights granted under the Adapter's License You apply.
Section 4 - Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that apply to
Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right to
extract, reuse, reproduce, and Share all or a substantial portion of
the contents of the database;
b. if You include all or a substantial portion of the database contents
in a database in which You have Sui Generis Database Rights, then
the database in which You have Sui Generis Database Rights (but not
its individual contents) is Adapted Material, including for purposes
of Section 3(b); and
c. You must comply with the conditions in Section 3(a) if You Share all
or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.
Section 5 - Disclaimer of Warranties and Limitation of Liability.
a. Unless otherwise separately undertaken by the Licensor, to the
extent possible, the Licensor offers the Licensed Material as-is and
as-available, and makes no representations or warranties of any kind
concerning the Licensed Material, whether express, implied,
statutory, or other. This includes, without limitation, warranties
of title, merchantability, fitness for a particular purpose,
non-infringement, absence of latent or other defects, accuracy, or
the presence or absence of errors, whether or not known or
discoverable. Where disclaimers of warranties are not allowed in
full or in part, this disclaimer may not apply to You.
b. To the extent possible, in no event will the Licensor be liable to
You on any legal theory (including, without limitation, negligence)
or otherwise for any direct, special, indirect, incidental,
consequential, punitive, exemplary, or other losses, costs,
expenses, or damages arising out of this Public License or use of
the Licensed Material, even if the Licensor has been advised of the
possibility of such losses, costs, expenses, or damages. Where a
limitation of liability is not allowed in full or in part, this
limitation may not apply to You.
c. The disclaimer of warranties and limitation of liability provided
above shall be interpreted in a manner that, to the extent possible,
most closely approximates an absolute disclaimer and waiver of all
liability.
Section 6 - Term and Termination.
a. This Public License applies for the term of the Copyright and
Similar Rights licensed here. However, if You fail to comply with
this Public License, then Your rights under this Public License
terminate automatically.
b. Where Your right to use the Licensed Material has terminated under
Section 6(a), it reinstates:
1. automatically as of the date the violation is cured, provided it
is cured within 30 days of Your discovery of the violation; or
2. upon express reinstatement by the Licensor.
c. For the avoidance of doubt, this Section 6(b) does not affect any
right the Licensor may have to seek remedies for Your violations of
this Public License.
d. For the avoidance of doubt, the Licensor may also offer the Licensed
Material under separate terms or conditions or stop distributing the
Licensed Material at any time; however, doing so will not terminate
this Public License.
e. Sections 1, 5, 6, 7, and 8 survive termination of this Public License.
Section 7 - Other Terms and Conditions.
a. The Licensor shall not be bound by any additional or different terms
or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the
Licensed Material not stated herein are separate from and
independent of the terms and conditions of this Public License.
Section 8 - Interpretation.
a. For the avoidance of doubt, this Public License does not, and shall
not be interpreted to, reduce, limit, restrict, or impose conditions
on any use of the Licensed Material that could lawfully be made
without permission under this Public License.
b. To the extent possible, if any provision of this Public License is
deemed unenforceable, it shall be automatically reformed to the
minimum extent necessary to make it enforceable. If the provision
cannot be reformed, it shall be severed from this Public License
without affecting the enforceability of the remaining terms and
conditions.
c. No term or condition of this Public License will be waived and no
failure to comply consented to unless expressly agreed to by the
Licensor.
d. Nothing in this Public License constitutes or may be interpreted as
a limitation upon, or waiver of, any privileges and immunities that
apply to the Licensor or You, including from the legal processes of
any jurisdiction or authority.
Creative Commons is not a party to its public licenses. Notwithstanding,
Creative Commons may elect to apply one of its public licenses to material
it publishes and in those instances will be considered the "Licensor." The
text of the Creative Commons public licenses is dedicated to the public
domain under the CC0 Public Domain Dedication. Except for the limited
purpose of indicating that material is shared under a Creative Commons
public license or as otherwise permitted by the Creative Commons policies
published at creativecommons.org/policies, Creative Commons does not
authorize the use of the trademark "Creative Commons" or any other
trademark or logo of Creative Commons without its prior written consent
including, without limitation, in connection with any unauthorized
modifications to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For the
avoidance of doubt, this paragraph does not form part of the public
licenses.
Creative Commons may be contacted at creativecommons.org.


@ -10161,7 +10161,6 @@ L: netdev@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next.git
S: Maintained
F: net/core/flow.c
F: net/xfrm/
F: net/key/
F: net/ipv4/xfrm*
@ -13101,7 +13100,7 @@ SELINUX SECURITY MODULE
M: Paul Moore <paul@paul-moore.com>
M: Stephen Smalley <sds@tycho.nsa.gov>
M: Eric Paris <eparis@parisplace.org>
L: selinux@tycho.nsa.gov (moderated for non-subscribers)
L: selinux@vger.kernel.org
W: https://selinuxproject.org
W: https://github.com/SELinuxProject
T: git git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/selinux.git


@ -2,7 +2,7 @@
VERSION = 4
PATCHLEVEL = 19
SUBLEVEL = 0
EXTRAVERSION = -rc7
EXTRAVERSION = -rc8
NAME = Merciless Moray
# *DOCUMENTATION*


@ -478,15 +478,15 @@ static const struct coproc_reg cp15_regs[] = {
/* ICC_SGI1R */
{ CRm64(12), Op1( 0), is64, access_gic_sgi},
/* ICC_ASGI1R */
{ CRm64(12), Op1( 1), is64, access_gic_sgi},
/* ICC_SGI0R */
{ CRm64(12), Op1( 2), is64, access_gic_sgi},
/* VBAR: swapped by interrupt.S. */
{ CRn(12), CRm( 0), Op1( 0), Op2( 0), is32,
NULL, reset_val, c12_VBAR, 0x00000000 },
/* ICC_ASGI1R */
{ CRm64(12), Op1( 1), is64, access_gic_sgi},
/* ICC_SGI0R */
{ CRm64(12), Op1( 2), is64, access_gic_sgi},
/* ICC_SRE */
{ CRn(12), CRm(12), Op1( 0), Op2(5), is32, access_gic_sre },


@ -966,6 +966,12 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
return 0;
}
static int armv8pmu_filter_match(struct perf_event *event)
{
unsigned long evtype = event->hw.config_base & ARMV8_PMU_EVTYPE_EVENT;
return evtype != ARMV8_PMUV3_PERFCTR_CHAIN;
}
static void armv8pmu_reset(void *info)
{
struct arm_pmu *cpu_pmu = (struct arm_pmu *)info;
@ -1114,6 +1120,7 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu)
cpu_pmu->stop = armv8pmu_stop,
cpu_pmu->reset = armv8pmu_reset,
cpu_pmu->set_event_filter = armv8pmu_set_event_filter;
cpu_pmu->filter_match = armv8pmu_filter_match;
return 0;
}


@ -64,6 +64,9 @@
#include <asm/xen/hypervisor.h>
#include <asm/mmu_context.h>
static int num_standard_resources;
static struct resource *standard_resources;
phys_addr_t __fdt_pointer __initdata;
/*
@ -206,14 +209,19 @@ static void __init request_standard_resources(void)
{
struct memblock_region *region;
struct resource *res;
unsigned long i = 0;
kernel_code.start = __pa_symbol(_text);
kernel_code.end = __pa_symbol(__init_begin - 1);
kernel_data.start = __pa_symbol(_sdata);
kernel_data.end = __pa_symbol(_end - 1);
num_standard_resources = memblock.memory.cnt;
standard_resources = alloc_bootmem_low(num_standard_resources *
sizeof(*standard_resources));
for_each_memblock(memory, region) {
res = alloc_bootmem_low(sizeof(*res));
res = &standard_resources[i++];
if (memblock_is_nomap(region)) {
res->name = "reserved";
res->flags = IORESOURCE_MEM;
@ -243,36 +251,26 @@ static void __init request_standard_resources(void)
static int __init reserve_memblock_reserved_regions(void)
{
phys_addr_t start, end, roundup_end = 0;
struct resource *mem, *res;
u64 i;
u64 i, j;
for_each_reserved_mem_region(i, &start, &end) {
if (end <= roundup_end)
continue; /* done already */
for (i = 0; i < num_standard_resources; ++i) {
struct resource *mem = &standard_resources[i];
phys_addr_t r_start, r_end, mem_size = resource_size(mem);
start = __pfn_to_phys(PFN_DOWN(start));
end = __pfn_to_phys(PFN_UP(end)) - 1;
roundup_end = end;
res = kzalloc(sizeof(*res), GFP_ATOMIC);
if (WARN_ON(!res))
return -ENOMEM;
res->start = start;
res->end = end;
res->name = "reserved";
res->flags = IORESOURCE_MEM;
mem = request_resource_conflict(&iomem_resource, res);
/*
* We expected memblock_reserve() regions to conflict with
* memory created by request_standard_resources().
*/
if (WARN_ON_ONCE(!mem))
if (!memblock_is_region_reserved(mem->start, mem_size))
continue;
kfree(res);
reserve_region_with_split(mem, start, end, "reserved");
for_each_reserved_mem_region(j, &r_start, &r_end) {
resource_size_t start, end;
start = max(PFN_PHYS(PFN_DOWN(r_start)), mem->start);
end = min(PFN_PHYS(PFN_UP(r_end)) - 1, mem->end);
if (start > mem->end || end < mem->start)
continue;
reserve_region_with_split(mem, start, end, "reserved");
}
}
return 0;


@ -426,7 +426,7 @@ void unwind_frame_init_task(struct unwind_frame_info *info,
r.gr[30] = get_parisc_stackpointer();
regs = &r;
}
unwind_frame_init(info, task, &r);
unwind_frame_init(info, task, regs);
} else {
unwind_frame_init_from_blocked_task(info, task);
}


@ -114,7 +114,7 @@
*/
#define _HPAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
_PAGE_ACCESSED | H_PAGE_THP_HUGE | _PAGE_PTE | \
_PAGE_SOFT_DIRTY)
_PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
/*
* user access blocked by key
*/
@ -132,7 +132,7 @@
*/
#define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
_PAGE_ACCESSED | _PAGE_SPECIAL | _PAGE_PTE | \
_PAGE_SOFT_DIRTY)
_PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
#define H_PTE_PKEY (H_PTE_PKEY_BIT0 | H_PTE_PKEY_BIT1 | H_PTE_PKEY_BIT2 | \
H_PTE_PKEY_BIT3 | H_PTE_PKEY_BIT4)


@ -28,7 +28,7 @@ typedef struct {
unsigned short sock_id; /* physical package */
unsigned short core_id;
unsigned short max_cache_id; /* groupings of highest shared cache */
unsigned short proc_id; /* strand (aka HW thread) id */
signed short proc_id; /* strand (aka HW thread) id */
} cpuinfo_sparc;
DECLARE_PER_CPU(cpuinfo_sparc, __cpu_data);


@ -427,8 +427,9 @@
#define __NR_preadv2 358
#define __NR_pwritev2 359
#define __NR_statx 360
#define __NR_io_pgetevents 361
#define NR_syscalls 361
#define NR_syscalls 362
/* Bitmask values returned from kern_features system call. */
#define KERN_FEATURE_MIXED_MODE_STACK 0x00000001


@ -115,8 +115,8 @@ static int auxio_probe(struct platform_device *dev)
auxio_devtype = AUXIO_TYPE_SBUS;
size = 1;
} else {
printk("auxio: Unknown parent bus type [%pOFn]\n",
dp->parent);
printk("auxio: Unknown parent bus type [%s]\n",
dp->parent->name);
return -ENODEV;
}
auxio_register = of_ioremap(&dev->resource[0], 0, size, "auxio");


@ -24,6 +24,7 @@
#include <asm/cpudata.h>
#include <linux/uaccess.h>
#include <linux/atomic.h>
#include <linux/sched/clock.h>
#include <asm/nmi.h>
#include <asm/pcr.h>
#include <asm/cacheflush.h>
@ -927,6 +928,8 @@ static void read_in_all_counters(struct cpu_hw_events *cpuc)
sparc_perf_event_update(cp, &cp->hw,
cpuc->current_idx[i]);
cpuc->current_idx[i] = PIC_NO_INDEX;
if (cp->hw.state & PERF_HES_STOPPED)
cp->hw.state |= PERF_HES_ARCH;
}
}
}
@ -959,10 +962,12 @@ static void calculate_single_pcr(struct cpu_hw_events *cpuc)
enc = perf_event_get_enc(cpuc->events[i]);
cpuc->pcr[0] &= ~mask_for_index(idx);
if (hwc->state & PERF_HES_STOPPED)
if (hwc->state & PERF_HES_ARCH) {
cpuc->pcr[0] |= nop_for_index(idx);
else
} else {
cpuc->pcr[0] |= event_encoding(enc, idx);
hwc->state = 0;
}
}
out:
cpuc->pcr[0] |= cpuc->event[0]->hw.config_base;
@ -988,6 +993,9 @@ static void calculate_multiple_pcrs(struct cpu_hw_events *cpuc)
cpuc->current_idx[i] = idx;
if (cp->hw.state & PERF_HES_ARCH)
continue;
sparc_pmu_start(cp, PERF_EF_RELOAD);
}
out:
@ -1079,6 +1087,8 @@ static void sparc_pmu_start(struct perf_event *event, int flags)
event->hw.state = 0;
sparc_pmu_enable_event(cpuc, &event->hw, idx);
perf_event_update_userpage(event);
}
static void sparc_pmu_stop(struct perf_event *event, int flags)
@ -1371,9 +1381,9 @@ static int sparc_pmu_add(struct perf_event *event, int ef_flags)
cpuc->events[n0] = event->hw.event_base;
cpuc->current_idx[n0] = PIC_NO_INDEX;
event->hw.state = PERF_HES_UPTODATE;
event->hw.state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
if (!(ef_flags & PERF_EF_START))
event->hw.state |= PERF_HES_STOPPED;
event->hw.state |= PERF_HES_ARCH;
/*
* If group events scheduling transaction was started,
@ -1603,6 +1613,8 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
struct perf_sample_data data;
struct cpu_hw_events *cpuc;
struct pt_regs *regs;
u64 finish_clock;
u64 start_clock;
int i;
if (!atomic_read(&active_events))
@ -1616,6 +1628,8 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
return NOTIFY_DONE;
}
start_clock = sched_clock();
regs = args->regs;
cpuc = this_cpu_ptr(&cpu_hw_events);
@ -1654,6 +1668,10 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
sparc_pmu_stop(event, 0);
}
finish_clock = sched_clock();
perf_sample_event_took(finish_clock - start_clock);
return NOTIFY_STOP;
}


@ -41,8 +41,8 @@ static int power_probe(struct platform_device *op)
power_reg = of_ioremap(res, 0, 0x4, "power");
printk(KERN_INFO "%pOFn: Control reg at %llx\n",
op->dev.of_node, res->start);
printk(KERN_INFO "%s: Control reg at %llx\n",
op->dev.of_node->name, res->start);
if (has_button_interrupt(irq, op->dev.of_node)) {
if (request_irq(irq,


@ -68,8 +68,8 @@ static void __init sparc32_path_component(struct device_node *dp, char *tmp_buf)
return;
regs = rprop->value;
sprintf(tmp_buf, "%pOFn@%x,%x",
dp,
sprintf(tmp_buf, "%s@%x,%x",
dp->name,
regs->which_io, regs->phys_addr);
}
@ -84,8 +84,8 @@ static void __init sbus_path_component(struct device_node *dp, char *tmp_buf)
return;
regs = prop->value;
sprintf(tmp_buf, "%pOFn@%x,%x",
dp,
sprintf(tmp_buf, "%s@%x,%x",
dp->name,
regs->which_io,
regs->phys_addr);
}
@ -104,13 +104,13 @@ static void __init pci_path_component(struct device_node *dp, char *tmp_buf)
regs = prop->value;
devfn = (regs->phys_hi >> 8) & 0xff;
if (devfn & 0x07) {
sprintf(tmp_buf, "%pOFn@%x,%x",
dp,
sprintf(tmp_buf, "%s@%x,%x",
dp->name,
devfn >> 3,
devfn & 0x07);
} else {
sprintf(tmp_buf, "%pOFn@%x",
dp,
sprintf(tmp_buf, "%s@%x",
dp->name,
devfn >> 3);
}
}
@ -127,8 +127,8 @@ static void __init ebus_path_component(struct device_node *dp, char *tmp_buf)
regs = prop->value;
sprintf(tmp_buf, "%pOFn@%x,%x",
dp,
sprintf(tmp_buf, "%s@%x,%x",
dp->name,
regs->which_io, regs->phys_addr);
}
@ -167,8 +167,8 @@ static void __init ambapp_path_component(struct device_node *dp, char *tmp_buf)
return;
device = prop->value;
sprintf(tmp_buf, "%pOFn:%d:%d@%x,%x",
dp, *vendor, *device,
sprintf(tmp_buf, "%s:%d:%d@%x,%x",
dp->name, *vendor, *device,
*intr, reg0);
}
@ -201,7 +201,7 @@ char * __init build_path_component(struct device_node *dp)
tmp_buf[0] = '\0';
__build_path_component(dp, tmp_buf);
if (tmp_buf[0] == '\0')
snprintf(tmp_buf, sizeof(tmp_buf), "%pOFn", dp);
strcpy(tmp_buf, dp->name);
n = prom_early_alloc(strlen(tmp_buf) + 1);
strcpy(n, tmp_buf);


@ -82,8 +82,8 @@ static void __init sun4v_path_component(struct device_node *dp, char *tmp_buf)
regs = rprop->value;
if (!of_node_is_root(dp->parent)) {
sprintf(tmp_buf, "%pOFn@%x,%x",
dp,
sprintf(tmp_buf, "%s@%x,%x",
dp->name,
(unsigned int) (regs->phys_addr >> 32UL),
(unsigned int) (regs->phys_addr & 0xffffffffUL));
return;
@ -97,17 +97,17 @@ static void __init sun4v_path_component(struct device_node *dp, char *tmp_buf)
const char *prefix = (type == 0) ? "m" : "i";
if (low_bits)
sprintf(tmp_buf, "%pOFn@%s%x,%x",
dp, prefix,
sprintf(tmp_buf, "%s@%s%x,%x",
dp->name, prefix,
high_bits, low_bits);
else
sprintf(tmp_buf, "%pOFn@%s%x",
dp,
sprintf(tmp_buf, "%s@%s%x",
dp->name,
prefix,
high_bits);
} else if (type == 12) {
sprintf(tmp_buf, "%pOFn@%x",
dp, high_bits);
sprintf(tmp_buf, "%s@%x",
dp->name, high_bits);
}
}
@ -122,8 +122,8 @@ static void __init sun4u_path_component(struct device_node *dp, char *tmp_buf)
regs = prop->value;
if (!of_node_is_root(dp->parent)) {
sprintf(tmp_buf, "%pOFn@%x,%x",
dp,
sprintf(tmp_buf, "%s@%x,%x",
dp->name,
(unsigned int) (regs->phys_addr >> 32UL),
(unsigned int) (regs->phys_addr & 0xffffffffUL));
return;
@ -138,8 +138,8 @@ static void __init sun4u_path_component(struct device_node *dp, char *tmp_buf)
if (tlb_type >= cheetah)
mask = 0x7fffff;
sprintf(tmp_buf, "%pOFn@%x,%x",
dp,
sprintf(tmp_buf, "%s@%x,%x",
dp->name,
*(u32 *)prop->value,
(unsigned int) (regs->phys_addr & mask));
}
@ -156,8 +156,8 @@ static void __init sbus_path_component(struct device_node *dp, char *tmp_buf)
return;
regs = prop->value;
sprintf(tmp_buf, "%pOFn@%x,%x",
dp,
sprintf(tmp_buf, "%s@%x,%x",
dp->name,
regs->which_io,
regs->phys_addr);
}
@ -176,13 +176,13 @@ static void __init pci_path_component(struct device_node *dp, char *tmp_buf)
regs = prop->value;
devfn = (regs->phys_hi >> 8) & 0xff;
if (devfn & 0x07) {
sprintf(tmp_buf, "%pOFn@%x,%x",
dp,
sprintf(tmp_buf, "%s@%x,%x",
dp->name,
devfn >> 3,
devfn & 0x07);
} else {
sprintf(tmp_buf, "%pOFn@%x",
dp,
sprintf(tmp_buf, "%s@%x",
dp->name,
devfn >> 3);
}
}
@ -203,8 +203,8 @@ static void __init upa_path_component(struct device_node *dp, char *tmp_buf)
if (!prop)
return;
sprintf(tmp_buf, "%pOFn@%x,%x",
dp,
sprintf(tmp_buf, "%s@%x,%x",
dp->name,
*(u32 *) prop->value,
(unsigned int) (regs->phys_addr & 0xffffffffUL));
}
@ -221,7 +221,7 @@ static void __init vdev_path_component(struct device_node *dp, char *tmp_buf)
regs = prop->value;
sprintf(tmp_buf, "%pOFn@%x", dp, *regs);
sprintf(tmp_buf, "%s@%x", dp->name, *regs);
}
/* "name@addrhi,addrlo" */
@ -236,8 +236,8 @@ static void __init ebus_path_component(struct device_node *dp, char *tmp_buf)
regs = prop->value;
sprintf(tmp_buf, "%pOFn@%x,%x",
dp,
sprintf(tmp_buf, "%s@%x,%x",
dp->name,
(unsigned int) (regs->phys_addr >> 32UL),
(unsigned int) (regs->phys_addr & 0xffffffffUL));
}
@ -257,8 +257,8 @@ static void __init i2c_path_component(struct device_node *dp, char *tmp_buf)
/* This actually isn't right... should look at the #address-cells
* property of the i2c bus node etc. etc.
*/
sprintf(tmp_buf, "%pOFn@%x,%x",
dp, regs[0], regs[1]);
sprintf(tmp_buf, "%s@%x,%x",
dp->name, regs[0], regs[1]);
}
/* "name@reg0[,reg1]" */
@ -274,11 +274,11 @@ static void __init usb_path_component(struct device_node *dp, char *tmp_buf)
regs = prop->value;
if (prop->length == sizeof(u32) || regs[1] == 1) {
sprintf(tmp_buf, "%pOFn@%x",
dp, regs[0]);
sprintf(tmp_buf, "%s@%x",
dp->name, regs[0]);
} else {
sprintf(tmp_buf, "%pOFn@%x,%x",
dp, regs[0], regs[1]);
sprintf(tmp_buf, "%s@%x,%x",
dp->name, regs[0], regs[1]);
}
}
@ -295,11 +295,11 @@ static void __init ieee1394_path_component(struct device_node *dp, char *tmp_buf
regs = prop->value;
if (regs[2] || regs[3]) {
sprintf(tmp_buf, "%pOFn@%08x%08x,%04x%08x",
dp, regs[0], regs[1], regs[2], regs[3]);
sprintf(tmp_buf, "%s@%08x%08x,%04x%08x",
dp->name, regs[0], regs[1], regs[2], regs[3]);
} else {
sprintf(tmp_buf, "%pOFn@%08x%08x",
dp, regs[0], regs[1]);
sprintf(tmp_buf, "%s@%08x%08x",
dp->name, regs[0], regs[1]);
}
}
@ -361,7 +361,7 @@ char * __init build_path_component(struct device_node *dp)
tmp_buf[0] = '\0';
__build_path_component(dp, tmp_buf);
if (tmp_buf[0] == '\0')
snprintf(tmp_buf, sizeof(tmp_buf), "%pOFn", dp);
strcpy(tmp_buf, dp->name);
n = prom_early_alloc(strlen(tmp_buf) + 1);
strcpy(n, tmp_buf);


@ -84,8 +84,9 @@ __handle_signal:
ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %l1
sethi %hi(0xf << 20), %l4
and %l1, %l4, %l4
andn %l1, %l4, %l1
ba,pt %xcc, __handle_preemption_continue
andn %l1, %l4, %l1
srl %l4, 20, %l4
/* When returning from a NMI (%pil==15) interrupt we want to
* avoid running softirqs, doing IRQ tracing, preempting, etc.


@ -90,4 +90,4 @@ sys_call_table:
/*345*/ .long sys_renameat2, sys_seccomp, sys_getrandom, sys_memfd_create, sys_bpf
/*350*/ .long sys_execveat, sys_membarrier, sys_userfaultfd, sys_bind, sys_listen
/*355*/ .long sys_setsockopt, sys_mlock2, sys_copy_file_range, sys_preadv2, sys_pwritev2
/*360*/ .long sys_statx
/*360*/ .long sys_statx, sys_io_pgetevents


@ -91,7 +91,7 @@ sys_call_table32:
.word sys_renameat2, sys_seccomp, sys_getrandom, sys_memfd_create, sys_bpf
/*350*/ .word sys32_execveat, sys_membarrier, sys_userfaultfd, sys_bind, sys_listen
.word compat_sys_setsockopt, sys_mlock2, sys_copy_file_range, compat_sys_preadv2, compat_sys_pwritev2
/*360*/ .word sys_statx
/*360*/ .word sys_statx, compat_sys_io_pgetevents
#endif /* CONFIG_COMPAT */
@ -173,4 +173,4 @@ sys_call_table:
.word sys_renameat2, sys_seccomp, sys_getrandom, sys_memfd_create, sys_bpf
/*350*/ .word sys64_execveat, sys_membarrier, sys_userfaultfd, sys_bind, sys_listen
.word sys_setsockopt, sys_mlock2, sys_copy_file_range, sys_preadv2, sys_pwritev2
/*360*/ .word sys_statx
/*360*/ .word sys_statx, sys_io_pgetevents


@ -33,9 +33,19 @@
#define TICK_PRIV_BIT (1ULL << 63)
#endif
#ifdef CONFIG_SPARC64
#define SYSCALL_STRING \
"ta 0x6d;" \
"sub %%g0, %%o0, %%o0;" \
"bcs,a 1f;" \
" sub %%g0, %%o0, %%o0;" \
"1:"
#else
#define SYSCALL_STRING \
"ta 0x10;" \
"bcs,a 1f;" \
" sub %%g0, %%o0, %%o0;" \
"1:"
#endif
#define SYSCALL_CLOBBERS \
"f0", "f1", "f2", "f3", "f4", "f5", "f6", "f7", \


@ -262,7 +262,9 @@ static __init int vdso_setup(char *s)
unsigned long val;
err = kstrtoul(s, 10, &val);
if (err)
return err;
vdso_enabled = val;
return err;
return 0;
}
__setup("vdso=", vdso_setup);


@ -124,7 +124,7 @@
*/
#define _PAGE_CHG_MASK (PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT | \
_PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY | \
_PAGE_SOFT_DIRTY)
_PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
#define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
/*


@ -436,14 +436,18 @@ static inline struct kvm_svm *to_kvm_svm(struct kvm *kvm)
static inline bool svm_sev_enabled(void)
{
return max_sev_asid;
return IS_ENABLED(CONFIG_KVM_AMD_SEV) ? max_sev_asid : 0;
}
static inline bool sev_guest(struct kvm *kvm)
{
#ifdef CONFIG_KVM_AMD_SEV
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
return sev->active;
#else
return false;
#endif
}
static inline int sev_get_asid(struct kvm *kvm)


@ -1572,8 +1572,12 @@ static int vmx_hv_remote_flush_tlb(struct kvm *kvm)
goto out;
}
/*
* FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE hypercall needs the address of the
* base of EPT PML4 table, strip off EPT configuration information.
*/
ret = hyperv_flush_guest_mapping(
to_vmx(kvm_get_vcpu(kvm, 0))->ept_pointer);
to_vmx(kvm_get_vcpu(kvm, 0))->ept_pointer & PAGE_MASK);
out:
spin_unlock(&to_kvm_vmx(kvm)->ept_pointer_lock);


@ -310,6 +310,7 @@ static void scale_up(struct rq_wb *rwb)
rq_depth_scale_up(&rwb->rq_depth);
calc_wb_limits(rwb);
rwb->unknown_cnt = 0;
rwb_wake_all(rwb);
rwb_trace_step(rwb, "scale up");
}
@ -318,7 +319,6 @@ static void scale_down(struct rq_wb *rwb, bool hard_throttle)
rq_depth_scale_down(&rwb->rq_depth, hard_throttle);
calc_wb_limits(rwb);
rwb->unknown_cnt = 0;
rwb_wake_all(rwb);
rwb_trace_step(rwb, "scale down");
}


@ -36,6 +36,10 @@ MODULE_VERSION(DRV_MODULE_VERSION);
#define VDC_TX_RING_SIZE 512
#define VDC_DEFAULT_BLK_SIZE 512
#define MAX_XFER_BLKS (128 * 1024)
#define MAX_XFER_SIZE (MAX_XFER_BLKS / VDC_DEFAULT_BLK_SIZE)
#define MAX_RING_COOKIES ((MAX_XFER_BLKS / PAGE_SIZE) + 2)
#define WAITING_FOR_LINK_UP 0x01
#define WAITING_FOR_TX_SPACE 0x02
#define WAITING_FOR_GEN_CMD 0x04
@ -450,7 +454,7 @@ static int __send_request(struct request *req)
{
struct vdc_port *port = req->rq_disk->private_data;
struct vio_dring_state *dr = &port->vio.drings[VIO_DRIVER_TX_RING];
struct scatterlist sg[port->ring_cookies];
struct scatterlist sg[MAX_RING_COOKIES];
struct vdc_req_entry *rqe;
struct vio_disk_desc *desc;
unsigned int map_perm;
@ -458,6 +462,9 @@ static int __send_request(struct request *req)
u64 len;
u8 op;
if (WARN_ON(port->ring_cookies > MAX_RING_COOKIES))
return -EINVAL;
map_perm = LDC_MAP_SHADOW | LDC_MAP_DIRECT | LDC_MAP_IO;
if (rq_data_dir(req) == READ) {
@ -984,9 +991,8 @@ static int vdc_port_probe(struct vio_dev *vdev, const struct vio_device_id *id)
goto err_out_free_port;
port->vdisk_block_size = VDC_DEFAULT_BLK_SIZE;
port->max_xfer_size = ((128 * 1024) / port->vdisk_block_size);
port->ring_cookies = ((port->max_xfer_size *
port->vdisk_block_size) / PAGE_SIZE) + 2;
port->max_xfer_size = MAX_XFER_SIZE;
port->ring_cookies = MAX_RING_COOKIES;
err = vio_ldc_alloc(&port->vio, &vdc_ldc_cfg, port);
if (err)


@ -1434,8 +1434,16 @@ static void __init sun4i_ccu_init(struct device_node *node,
return;
}
/* Force the PLL-Audio-1x divider to 1 */
val = readl(reg + SUN4I_PLL_AUDIO_REG);
/*
* Force VCO and PLL bias current to lowest setting. Higher
* settings interfere with sigma-delta modulation and result
* in audible noise and distortions when using SPDIF or I2S.
*/
val &= ~GENMASK(25, 16);
/* Force the PLL-Audio-1x divider to 1 */
val &= ~GENMASK(29, 26);
writel(val | (1 << 26), reg + SUN4I_PLL_AUDIO_REG);


@ -567,9 +567,9 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
struct drm_mode_crtc *crtc_req = data;
struct drm_crtc *crtc;
struct drm_plane *plane;
struct drm_connector **connector_set = NULL, *connector;
struct drm_framebuffer *fb = NULL;
struct drm_display_mode *mode = NULL;
struct drm_connector **connector_set, *connector;
struct drm_framebuffer *fb;
struct drm_display_mode *mode;
struct drm_mode_set set;
uint32_t __user *set_connectors_ptr;
struct drm_modeset_acquire_ctx ctx;
@ -598,6 +598,10 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
mutex_lock(&crtc->dev->mode_config.mutex);
drm_modeset_acquire_init(&ctx, DRM_MODESET_ACQUIRE_INTERRUPTIBLE);
retry:
connector_set = NULL;
fb = NULL;
mode = NULL;
ret = drm_modeset_lock_all_ctx(crtc->dev, &ctx);
if (ret)
goto out;


@ -113,6 +113,9 @@ static const struct edid_quirk {
/* AEO model 0 reports 8 bpc, but is a 6 bpc panel */
{ "AEO", 0, EDID_QUIRK_FORCE_6BPC },
/* BOE model on HP Pavilion 15-n233sl reports 8 bpc, but is a 6 bpc panel */
{ "BOE", 0x78b, EDID_QUIRK_FORCE_6BPC },
/* CPT panel of Asus UX303LA reports 8 bpc, but is a 6 bpc panel */
{ "CPT", 0x17df, EDID_QUIRK_FORCE_6BPC },
@ -4279,7 +4282,7 @@ static void drm_parse_ycbcr420_deep_color_info(struct drm_connector *connector,
struct drm_hdmi_info *hdmi = &connector->display_info.hdmi;
dc_mask = db[7] & DRM_EDID_YCBCR420_DC_MASK;
hdmi->y420_dc_modes |= dc_mask;
hdmi->y420_dc_modes = dc_mask;
}
static void drm_parse_hdmi_forum_vsdb(struct drm_connector *connector,


@ -1580,6 +1580,25 @@ unlock:
}
EXPORT_SYMBOL(drm_fb_helper_ioctl);
static bool drm_fb_pixel_format_equal(const struct fb_var_screeninfo *var_1,
const struct fb_var_screeninfo *var_2)
{
return var_1->bits_per_pixel == var_2->bits_per_pixel &&
var_1->grayscale == var_2->grayscale &&
var_1->red.offset == var_2->red.offset &&
var_1->red.length == var_2->red.length &&
var_1->red.msb_right == var_2->red.msb_right &&
var_1->green.offset == var_2->green.offset &&
var_1->green.length == var_2->green.length &&
var_1->green.msb_right == var_2->green.msb_right &&
var_1->blue.offset == var_2->blue.offset &&
var_1->blue.length == var_2->blue.length &&
var_1->blue.msb_right == var_2->blue.msb_right &&
var_1->transp.offset == var_2->transp.offset &&
var_1->transp.length == var_2->transp.length &&
var_1->transp.msb_right == var_2->transp.msb_right;
}
/**
* drm_fb_helper_check_var - implementation for &fb_ops.fb_check_var
* @var: screeninfo to check
@ -1590,7 +1609,6 @@ int drm_fb_helper_check_var(struct fb_var_screeninfo *var,
{
struct drm_fb_helper *fb_helper = info->par;
struct drm_framebuffer *fb = fb_helper->fb;
int depth;
if (var->pixclock != 0 || in_dbg_master())
return -EINVAL;
@ -1610,72 +1628,15 @@ int drm_fb_helper_check_var(struct fb_var_screeninfo *var,
return -EINVAL;
}
switch (var->bits_per_pixel) {
case 16:
depth = (var->green.length == 6) ? 16 : 15;
break;
case 32:
depth = (var->transp.length > 0) ? 32 : 24;
break;
default:
depth = var->bits_per_pixel;
break;
}
switch (depth) {
case 8:
var->red.offset = 0;
var->green.offset = 0;
var->blue.offset = 0;
var->red.length = 8;
var->green.length = 8;
var->blue.length = 8;
var->transp.length = 0;
var->transp.offset = 0;
break;
case 15:
var->red.offset = 10;
var->green.offset = 5;
var->blue.offset = 0;
var->red.length = 5;
var->green.length = 5;
var->blue.length = 5;
var->transp.length = 1;
var->transp.offset = 15;
break;
case 16:
var->red.offset = 11;
var->green.offset = 5;
var->blue.offset = 0;
var->red.length = 5;
var->green.length = 6;
var->blue.length = 5;
var->transp.length = 0;
var->transp.offset = 0;
break;
case 24:
var->red.offset = 16;
var->green.offset = 8;
var->blue.offset = 0;
var->red.length = 8;
var->green.length = 8;
var->blue.length = 8;
var->transp.length = 0;
var->transp.offset = 0;
break;
case 32:
var->red.offset = 16;
var->green.offset = 8;
var->blue.offset = 0;
var->red.length = 8;
var->green.length = 8;
var->blue.length = 8;
var->transp.length = 8;
var->transp.offset = 24;
break;
default:
/*
* drm fbdev emulation doesn't support changing the pixel format at all,
* so reject all pixel format changing requests.
*/
if (!drm_fb_pixel_format_equal(var, &info->var)) {
DRM_DEBUG("fbdev emulation doesn't support changing the pixel format\n");
return -EINVAL;
}
return 0;
}
EXPORT_SYMBOL(drm_fb_helper_check_var);


@ -2270,7 +2270,7 @@ EXPORT_SYMBOL(i2c_put_adapter);
*
* Return: NULL if a DMA safe buffer was not obtained. Use msg->buf with PIO.
* Or a valid pointer to be used with DMA. After use, release it by
* calling i2c_release_dma_safe_msg_buf().
* calling i2c_put_dma_safe_msg_buf().
*
* This function must only be called from process context!
*/


@ -46,6 +46,8 @@
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/nospec.h>
#include <linux/uaccess.h>
#include <rdma/ib.h>
@ -1120,6 +1122,7 @@ static ssize_t ib_ucm_write(struct file *filp, const char __user *buf,
if (hdr.cmd >= ARRAY_SIZE(ucm_cmd_table))
return -EINVAL;
hdr.cmd = array_index_nospec(hdr.cmd, ARRAY_SIZE(ucm_cmd_table));
if (hdr.in + sizeof(hdr) > len)
return -EINVAL;


@ -44,6 +44,8 @@
#include <linux/module.h>
#include <linux/nsproxy.h>
#include <linux/nospec.h>
#include <rdma/rdma_user_cm.h>
#include <rdma/ib_marshall.h>
#include <rdma/rdma_cm.h>
@ -1676,6 +1678,7 @@ static ssize_t ucma_write(struct file *filp, const char __user *buf,
if (hdr.cmd >= ARRAY_SIZE(ucma_cmd_table))
return -EINVAL;
hdr.cmd = array_index_nospec(hdr.cmd, ARRAY_SIZE(ucma_cmd_table));
if (hdr.in + sizeof(hdr) > len)
return -EINVAL;


@ -320,9 +320,12 @@ int bcmgenet_mii_probe(struct net_device *dev)
phydev->advertising = phydev->supported;
/* The internal PHY has its link interrupts routed to the
* Ethernet MAC ISRs
* Ethernet MAC ISRs. On GENETv5 there is a hardware issue
* that prevents the signaling of link UP interrupts when
* the link operates at 10Mbps, so fallback to polling for
* those versions of GENET.
*/
if (priv->internal_phy)
if (priv->internal_phy && !GENET_IS_V5(priv))
dev->phydev->irq = PHY_IGNORE_INTERRUPT;
return 0;


@ -452,6 +452,10 @@ struct bufdesc_ex {
* initialisation.
*/
#define FEC_QUIRK_MIB_CLEAR (1 << 15)
/* Only i.MX25/i.MX27/i.MX28 controller supports FRBR,FRSR registers,
* those FIFO receive registers are resolved in other platforms.
*/
#define FEC_QUIRK_HAS_FRREG (1 << 16)
struct bufdesc_prop {
int qid;


@ -91,14 +91,16 @@ static struct platform_device_id fec_devtype[] = {
.driver_data = 0,
}, {
.name = "imx25-fec",
.driver_data = FEC_QUIRK_USE_GASKET | FEC_QUIRK_MIB_CLEAR,
.driver_data = FEC_QUIRK_USE_GASKET | FEC_QUIRK_MIB_CLEAR |
FEC_QUIRK_HAS_FRREG,
}, {
.name = "imx27-fec",
.driver_data = FEC_QUIRK_MIB_CLEAR,
.driver_data = FEC_QUIRK_MIB_CLEAR | FEC_QUIRK_HAS_FRREG,
}, {
.name = "imx28-fec",
.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME |
FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC,
FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC |
FEC_QUIRK_HAS_FRREG,
}, {
.name = "imx6q-fec",
.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
@ -2162,7 +2164,13 @@ static void fec_enet_get_regs(struct net_device *ndev,
memset(buf, 0, regs->len);
for (i = 0; i < ARRAY_SIZE(fec_enet_register_offset); i++) {
off = fec_enet_register_offset[i] / 4;
off = fec_enet_register_offset[i];
if ((off == FEC_R_BOUND || off == FEC_R_FSTART) &&
!(fep->quirks & FEC_QUIRK_HAS_FRREG))
continue;
off >>= 2;
buf[off] = readl(&theregs[off]);
}
}


@ -433,10 +433,9 @@ static inline u16 mlx5e_icosq_wrap_cnt(struct mlx5e_icosq *sq)
static inline void mlx5e_fill_icosq_frag_edge(struct mlx5e_icosq *sq,
struct mlx5_wq_cyc *wq,
u16 pi, u16 frag_pi)
u16 pi, u16 nnops)
{
struct mlx5e_sq_wqe_info *edge_wi, *wi = &sq->db.ico_wqe[pi];
u8 nnops = mlx5_wq_cyc_get_frag_size(wq) - frag_pi;
edge_wi = wi + nnops;
@ -455,15 +454,14 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
struct mlx5_wq_cyc *wq = &sq->wq;
struct mlx5e_umr_wqe *umr_wqe;
u16 xlt_offset = ix << (MLX5E_LOG_ALIGNED_MPWQE_PPW - 1);
u16 pi, frag_pi;
u16 pi, contig_wqebbs_room;
int err;
int i;
pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
frag_pi = mlx5_wq_cyc_ctr2fragix(wq, sq->pc);
if (unlikely(frag_pi + MLX5E_UMR_WQEBBS > mlx5_wq_cyc_get_frag_size(wq))) {
mlx5e_fill_icosq_frag_edge(sq, wq, pi, frag_pi);
contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
if (unlikely(contig_wqebbs_room < MLX5E_UMR_WQEBBS)) {
mlx5e_fill_icosq_frag_edge(sq, wq, pi, contig_wqebbs_room);
pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
}


@ -290,10 +290,9 @@ dma_unmap_wqe_err:
static inline void mlx5e_fill_sq_frag_edge(struct mlx5e_txqsq *sq,
struct mlx5_wq_cyc *wq,
u16 pi, u16 frag_pi)
u16 pi, u16 nnops)
{
struct mlx5e_tx_wqe_info *edge_wi, *wi = &sq->db.wqe_info[pi];
u8 nnops = mlx5_wq_cyc_get_frag_size(wq) - frag_pi;
edge_wi = wi + nnops;
@ -348,8 +347,8 @@ netdev_tx_t mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
struct mlx5e_tx_wqe_info *wi;
struct mlx5e_sq_stats *stats = sq->stats;
u16 headlen, ihs, contig_wqebbs_room;
u16 ds_cnt, ds_cnt_inl = 0;
u16 headlen, ihs, frag_pi;
u8 num_wqebbs, opcode;
u32 num_bytes;
int num_dma;
@ -386,9 +385,9 @@ netdev_tx_t mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
}
num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS);
frag_pi = mlx5_wq_cyc_ctr2fragix(wq, sq->pc);
if (unlikely(frag_pi + num_wqebbs > mlx5_wq_cyc_get_frag_size(wq))) {
mlx5e_fill_sq_frag_edge(sq, wq, pi, frag_pi);
contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
if (unlikely(contig_wqebbs_room < num_wqebbs)) {
mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room);
mlx5e_sq_fetch_wqe(sq, &wqe, &pi);
}
@ -636,7 +635,7 @@ netdev_tx_t mlx5i_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
struct mlx5e_tx_wqe_info *wi;
struct mlx5e_sq_stats *stats = sq->stats;
u16 headlen, ihs, pi, frag_pi;
u16 headlen, ihs, pi, contig_wqebbs_room;
u16 ds_cnt, ds_cnt_inl = 0;
u8 num_wqebbs, opcode;
u32 num_bytes;
@ -672,13 +671,14 @@ netdev_tx_t mlx5i_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
}
num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS);
frag_pi = mlx5_wq_cyc_ctr2fragix(wq, sq->pc);
if (unlikely(frag_pi + num_wqebbs > mlx5_wq_cyc_get_frag_size(wq))) {
pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
if (unlikely(contig_wqebbs_room < num_wqebbs)) {
mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room);
pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
mlx5e_fill_sq_frag_edge(sq, wq, pi, frag_pi);
}
mlx5i_sq_fetch_wqe(sq, &wqe, &pi);
mlx5i_sq_fetch_wqe(sq, &wqe, pi);
/* fill wqe */
wi = &sq->db.wqe_info[pi];


@ -273,7 +273,7 @@ static void eq_pf_process(struct mlx5_eq *eq)
case MLX5_PFAULT_SUBTYPE_WQE:
/* WQE based event */
pfault->type =
be32_to_cpu(pf_eqe->wqe.pftype_wq) >> 24;
(be32_to_cpu(pf_eqe->wqe.pftype_wq) >> 24) & 0x7;
pfault->token =
be32_to_cpu(pf_eqe->wqe.token);
pfault->wqe.wq_num =


@ -245,7 +245,7 @@ static void *mlx5_fpga_ipsec_cmd_exec(struct mlx5_core_dev *mdev,
return ERR_PTR(res);
}
/* Context will be freed by wait func after completion */
/* Context should be freed by the caller after completion. */
return context;
}
@ -418,10 +418,8 @@ static int mlx5_fpga_ipsec_set_caps(struct mlx5_core_dev *mdev, u32 flags)
cmd.cmd = htonl(MLX5_FPGA_IPSEC_CMD_OP_SET_CAP);
cmd.flags = htonl(flags);
context = mlx5_fpga_ipsec_cmd_exec(mdev, &cmd, sizeof(cmd));
if (IS_ERR(context)) {
err = PTR_ERR(context);
goto out;
}
if (IS_ERR(context))
return PTR_ERR(context);
err = mlx5_fpga_ipsec_cmd_wait(context);
if (err)
@ -435,6 +433,7 @@ static int mlx5_fpga_ipsec_set_caps(struct mlx5_core_dev *mdev, u32 flags)
}
out:
kfree(context);
return err;
}


@ -110,12 +110,11 @@ struct mlx5i_tx_wqe {
static inline void mlx5i_sq_fetch_wqe(struct mlx5e_txqsq *sq,
struct mlx5i_tx_wqe **wqe,
u16 *pi)
u16 pi)
{
struct mlx5_wq_cyc *wq = &sq->wq;
*pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
*wqe = mlx5_wq_cyc_get_wqe(wq, *pi);
*wqe = mlx5_wq_cyc_get_wqe(wq, pi);
memset(*wqe, 0, sizeof(**wqe));
}


@ -39,11 +39,6 @@ u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq)
return (u32)wq->fbc.sz_m1 + 1;
}
u16 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq)
{
return wq->fbc.frag_sz_m1 + 1;
}
u32 mlx5_cqwq_get_size(struct mlx5_cqwq *wq)
{
return wq->fbc.sz_m1 + 1;


@ -80,7 +80,6 @@ int mlx5_wq_cyc_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
void *wqc, struct mlx5_wq_cyc *wq,
struct mlx5_wq_ctrl *wq_ctrl);
u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq);
u16 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq);
int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
void *qpc, struct mlx5_wq_qp *wq,
@ -140,11 +139,6 @@ static inline u16 mlx5_wq_cyc_ctr2ix(struct mlx5_wq_cyc *wq, u16 ctr)
return ctr & wq->fbc.sz_m1;
}
static inline u16 mlx5_wq_cyc_ctr2fragix(struct mlx5_wq_cyc *wq, u16 ctr)
{
return ctr & wq->fbc.frag_sz_m1;
}
static inline u16 mlx5_wq_cyc_get_head(struct mlx5_wq_cyc *wq)
{
return mlx5_wq_cyc_ctr2ix(wq, wq->wqe_ctr);
@ -160,6 +154,11 @@ static inline void *mlx5_wq_cyc_get_wqe(struct mlx5_wq_cyc *wq, u16 ix)
return mlx5_frag_buf_get_wqe(&wq->fbc, ix);
}
static inline u16 mlx5_wq_cyc_get_contig_wqebbs(struct mlx5_wq_cyc *wq, u16 ix)
{
return mlx5_frag_buf_get_idx_last_contig_stride(&wq->fbc, ix) - ix + 1;
}
static inline int mlx5_wq_cyc_cc_bigger(u16 cc1, u16 cc2)
{
int equal = (cc1 == cc2);


@ -1055,6 +1055,7 @@ int mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info,
err_driver_init:
mlxsw_thermal_fini(mlxsw_core->thermal);
err_thermal_init:
mlxsw_hwmon_fini(mlxsw_core->hwmon);
err_hwmon_init:
if (!reload)
devlink_unregister(devlink);
@ -1088,6 +1089,7 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core,
if (mlxsw_core->driver->fini)
mlxsw_core->driver->fini(mlxsw_core);
mlxsw_thermal_fini(mlxsw_core->thermal);
mlxsw_hwmon_fini(mlxsw_core->hwmon);
if (!reload)
devlink_unregister(devlink);
mlxsw_emad_fini(mlxsw_core);


@ -359,6 +359,10 @@ static inline int mlxsw_hwmon_init(struct mlxsw_core *mlxsw_core,
return 0;
}
static inline void mlxsw_hwmon_fini(struct mlxsw_hwmon *mlxsw_hwmon)
{
}
#endif
struct mlxsw_thermal;


@ -303,8 +303,7 @@ int mlxsw_hwmon_init(struct mlxsw_core *mlxsw_core,
struct device *hwmon_dev;
int err;
mlxsw_hwmon = devm_kzalloc(mlxsw_bus_info->dev, sizeof(*mlxsw_hwmon),
GFP_KERNEL);
mlxsw_hwmon = kzalloc(sizeof(*mlxsw_hwmon), GFP_KERNEL);
if (!mlxsw_hwmon)
return -ENOMEM;
mlxsw_hwmon->core = mlxsw_core;
@ -321,10 +320,9 @@ int mlxsw_hwmon_init(struct mlxsw_core *mlxsw_core,
mlxsw_hwmon->groups[0] = &mlxsw_hwmon->group;
mlxsw_hwmon->group.attrs = mlxsw_hwmon->attrs;
hwmon_dev = devm_hwmon_device_register_with_groups(mlxsw_bus_info->dev,
"mlxsw",
mlxsw_hwmon,
mlxsw_hwmon->groups);
hwmon_dev = hwmon_device_register_with_groups(mlxsw_bus_info->dev,
"mlxsw", mlxsw_hwmon,
mlxsw_hwmon->groups);
if (IS_ERR(hwmon_dev)) {
err = PTR_ERR(hwmon_dev);
goto err_hwmon_register;
@ -337,5 +335,12 @@ int mlxsw_hwmon_init(struct mlxsw_core *mlxsw_core,
err_hwmon_register:
err_fans_init:
err_temp_init:
kfree(mlxsw_hwmon);
return err;
}
void mlxsw_hwmon_fini(struct mlxsw_hwmon *mlxsw_hwmon)
{
hwmon_device_unregister(mlxsw_hwmon->hwmon_dev);
kfree(mlxsw_hwmon);
}


@ -133,9 +133,9 @@ static inline int ocelot_vlant_wait_for_completion(struct ocelot *ocelot)
{
unsigned int val, timeout = 10;
/* Wait for the issued mac table command to be completed, or timeout.
* When the command read from ANA_TABLES_MACACCESS is
* MACACCESS_CMD_IDLE, the issued command completed successfully.
/* Wait for the issued vlan table command to be completed, or timeout.
* When the command read from ANA_TABLES_VLANACCESS is
* VLANACCESS_CMD_IDLE, the issued command completed successfully.
*/
do {
val = ocelot_read(ocelot, ANA_TABLES_VLANACCESS);


@ -399,12 +399,14 @@ nfp_fl_set_ip4(const struct tc_action *action, int idx, u32 off,
switch (off) {
case offsetof(struct iphdr, daddr):
set_ip_addr->ipv4_dst_mask = mask;
set_ip_addr->ipv4_dst = exact;
set_ip_addr->ipv4_dst_mask |= mask;
set_ip_addr->ipv4_dst &= ~mask;
set_ip_addr->ipv4_dst |= exact & mask;
break;
case offsetof(struct iphdr, saddr):
set_ip_addr->ipv4_src_mask = mask;
set_ip_addr->ipv4_src = exact;
set_ip_addr->ipv4_src_mask |= mask;
set_ip_addr->ipv4_src &= ~mask;
set_ip_addr->ipv4_src |= exact & mask;
break;
default:
return -EOPNOTSUPP;
@ -418,11 +420,12 @@ nfp_fl_set_ip4(const struct tc_action *action, int idx, u32 off,
}
static void
nfp_fl_set_ip6_helper(int opcode_tag, int idx, __be32 exact, __be32 mask,
nfp_fl_set_ip6_helper(int opcode_tag, u8 word, __be32 exact, __be32 mask,
struct nfp_fl_set_ipv6_addr *ip6)
{
ip6->ipv6[idx % 4].mask = mask;
ip6->ipv6[idx % 4].exact = exact;
ip6->ipv6[word].mask |= mask;
ip6->ipv6[word].exact &= ~mask;
ip6->ipv6[word].exact |= exact & mask;
ip6->reserved = cpu_to_be16(0);
ip6->head.jump_id = opcode_tag;
@ -435,6 +438,7 @@ nfp_fl_set_ip6(const struct tc_action *action, int idx, u32 off,
struct nfp_fl_set_ipv6_addr *ip_src)
{
__be32 exact, mask;
u8 word;
/* We are expecting tcf_pedit to return a big endian value */
mask = (__force __be32)~tcf_pedit_mask(action, idx);
@ -443,17 +447,20 @@ nfp_fl_set_ip6(const struct tc_action *action, int idx, u32 off,
if (exact & ~mask)
return -EOPNOTSUPP;
if (off < offsetof(struct ipv6hdr, saddr))
if (off < offsetof(struct ipv6hdr, saddr)) {
return -EOPNOTSUPP;
else if (off < offsetof(struct ipv6hdr, daddr))
nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_SRC, idx,
} else if (off < offsetof(struct ipv6hdr, daddr)) {
word = (off - offsetof(struct ipv6hdr, saddr)) / sizeof(exact);
nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_SRC, word,
exact, mask, ip_src);
else if (off < offsetof(struct ipv6hdr, daddr) +
sizeof(struct in6_addr))
nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_DST, idx,
} else if (off < offsetof(struct ipv6hdr, daddr) +
sizeof(struct in6_addr)) {
word = (off - offsetof(struct ipv6hdr, daddr)) / sizeof(exact);
nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_DST, word,
exact, mask, ip_dst);
else
} else {
return -EOPNOTSUPP;
}
return 0;
}
@ -511,7 +518,7 @@ nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow,
struct nfp_fl_set_eth set_eth;
enum pedit_header_type htype;
int idx, nkeys, err;
size_t act_size;
size_t act_size = 0;
u32 offset, cmd;
u8 ip_proto = 0;
@ -569,7 +576,9 @@ nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow,
act_size = sizeof(set_eth);
memcpy(nfp_action, &set_eth, act_size);
*a_len += act_size;
} else if (set_ip_addr.head.len_lw) {
}
if (set_ip_addr.head.len_lw) {
nfp_action += act_size;
act_size = sizeof(set_ip_addr);
memcpy(nfp_action, &set_ip_addr, act_size);
*a_len += act_size;
@ -577,10 +586,12 @@ nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow,
/* Hardware will automatically fix IPv4 and TCP/UDP checksum. */
*csum_updated |= TCA_CSUM_UPDATE_FLAG_IPV4HDR |
nfp_fl_csum_l4_to_flag(ip_proto);
} else if (set_ip6_dst.head.len_lw && set_ip6_src.head.len_lw) {
}
if (set_ip6_dst.head.len_lw && set_ip6_src.head.len_lw) {
/* TC compiles set src and dst IPv6 address as a single action,
* the hardware requires this to be 2 separate actions.
*/
nfp_action += act_size;
act_size = sizeof(set_ip6_src);
memcpy(nfp_action, &set_ip6_src, act_size);
*a_len += act_size;
@ -593,6 +604,7 @@ nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow,
/* Hardware will automatically fix TCP/UDP checksum. */
*csum_updated |= nfp_fl_csum_l4_to_flag(ip_proto);
} else if (set_ip6_dst.head.len_lw) {
nfp_action += act_size;
act_size = sizeof(set_ip6_dst);
memcpy(nfp_action, &set_ip6_dst, act_size);
*a_len += act_size;
@ -600,13 +612,16 @@ nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow,
/* Hardware will automatically fix TCP/UDP checksum. */
*csum_updated |= nfp_fl_csum_l4_to_flag(ip_proto);
} else if (set_ip6_src.head.len_lw) {
nfp_action += act_size;
act_size = sizeof(set_ip6_src);
memcpy(nfp_action, &set_ip6_src, act_size);
*a_len += act_size;
/* Hardware will automatically fix TCP/UDP checksum. */
*csum_updated |= nfp_fl_csum_l4_to_flag(ip_proto);
} else if (set_tport.head.len_lw) {
}
if (set_tport.head.len_lw) {
nfp_action += act_size;
act_size = sizeof(set_tport);
memcpy(nfp_action, &set_tport, act_size);
*a_len += act_size;
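The set_ip4/set_ip6 hunks above turn a plain overwrite into a cumulative masked write, so several pedit keys touching different bytes of the same field compose instead of the last key clobbering the earlier ones. A stand-alone model of the idiom (user-space C; the addresses are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* Each pedit key contributes (exact, mask); fields accumulate so a later
     * partial write only touches the bits its mask covers.
     */
    struct masked_field {
            uint32_t exact;
            uint32_t mask;
    };

    static void masked_set(struct masked_field *f, uint32_t exact, uint32_t mask)
    {
            f->mask |= mask;
            f->exact &= ~mask;
            f->exact |= exact & mask;
    }

    int main(void)
    {
            struct masked_field dst = { 0, 0 };

            masked_set(&dst, 0x0a000000, 0xff000000); /* set high byte: 10.x.x.x */
            masked_set(&dst, 0x00000001, 0x000000ff); /* set low byte:  x.x.x.1 */
            printf("exact=%08x mask=%08x\n", dst.exact, dst.mask);
            /* prints exact=0a000001 mask=ff0000ff: both writes survive */
            return 0;
    }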

View File

@ -228,7 +228,7 @@ static int qed_grc_attn_cb(struct qed_hwfn *p_hwfn)
attn_master_to_str(GET_FIELD(tmp, QED_GRC_ATTENTION_MASTER)),
GET_FIELD(tmp2, QED_GRC_ATTENTION_PF),
(GET_FIELD(tmp2, QED_GRC_ATTENTION_PRIV) ==
QED_GRC_ATTENTION_PRIV_VF) ? "VF" : "(Ireelevant)",
QED_GRC_ATTENTION_PRIV_VF) ? "VF" : "(Irrelevant)",
GET_FIELD(tmp2, QED_GRC_ATTENTION_VF));
out:

View File

@ -380,8 +380,6 @@ static void fm93c56a_select(struct ql3_adapter *qdev)
qdev->eeprom_cmd_data = AUBURN_EEPROM_CS_1;
ql_write_nvram_reg(qdev, spir, ISP_NVRAM_MASK | qdev->eeprom_cmd_data);
ql_write_nvram_reg(qdev, spir,
((ISP_NVRAM_MASK << 16) | qdev->eeprom_cmd_data));
}
/*

View File

@ -6528,17 +6528,15 @@ static int rtl8169_poll(struct napi_struct *napi, int budget)
struct rtl8169_private *tp = container_of(napi, struct rtl8169_private, napi);
struct net_device *dev = tp->dev;
u16 enable_mask = RTL_EVENT_NAPI | tp->event_slow;
int work_done= 0;
int work_done;
u16 status;
status = rtl_get_events(tp);
rtl_ack_events(tp, status & ~tp->event_slow);
if (status & RTL_EVENT_NAPI_RX)
work_done = rtl_rx(dev, tp, (u32) budget);
work_done = rtl_rx(dev, tp, (u32) budget);
if (status & RTL_EVENT_NAPI_TX)
rtl_tx(dev, tp);
rtl_tx(dev, tp);
if (status & tp->event_slow) {
enable_mask &= ~tp->event_slow;
@ -7071,20 +7069,12 @@ static int rtl_alloc_irq(struct rtl8169_private *tp)
{
unsigned int flags;
switch (tp->mac_version) {
case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_06:
if (tp->mac_version <= RTL_GIGA_MAC_VER_06) {
RTL_W8(tp, Cfg9346, Cfg9346_Unlock);
RTL_W8(tp, Config2, RTL_R8(tp, Config2) & ~MSIEnable);
RTL_W8(tp, Cfg9346, Cfg9346_Lock);
flags = PCI_IRQ_LEGACY;
break;
case RTL_GIGA_MAC_VER_39 ... RTL_GIGA_MAC_VER_40:
/* This version was reported to have issues with resume
* from suspend when using MSI-X
*/
flags = PCI_IRQ_LEGACY | PCI_IRQ_MSI;
break;
default:
} else {
flags = PCI_IRQ_ALL_TYPES;
}

View File

@ -831,12 +831,8 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
if (IS_ERR(rt))
return PTR_ERR(rt);
if (skb_dst(skb)) {
int mtu = dst_mtu(&rt->dst) - GENEVE_IPV4_HLEN -
info->options_len;
skb_dst_update_pmtu(skb, mtu);
}
skb_tunnel_check_pmtu(skb, &rt->dst,
GENEVE_IPV4_HLEN + info->options_len);
sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
if (geneve->collect_md) {
@ -881,11 +877,7 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
if (IS_ERR(dst))
return PTR_ERR(dst);
if (skb_dst(skb)) {
int mtu = dst_mtu(dst) - GENEVE_IPV6_HLEN - info->options_len;
skb_dst_update_pmtu(skb, mtu);
}
skb_tunnel_check_pmtu(skb, dst, GENEVE_IPV6_HLEN + info->options_len);
sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
if (geneve->collect_md) {

View File

@ -2267,8 +2267,9 @@ static void virtnet_freeze_down(struct virtio_device *vdev)
/* Make sure no work handler is accessing the device */
flush_work(&vi->config_work);
netif_tx_lock_bh(vi->dev);
netif_device_detach(vi->dev);
netif_tx_disable(vi->dev);
netif_tx_unlock_bh(vi->dev);
cancel_delayed_work_sync(&vi->refill);
if (netif_running(vi->dev)) {
@ -2304,7 +2305,9 @@ static int virtnet_restore_up(struct virtio_device *vdev)
}
}
netif_tx_lock_bh(vi->dev);
netif_device_attach(vi->dev);
netif_tx_unlock_bh(vi->dev);
return err;
}

View File

@ -2262,11 +2262,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
}
ndst = &rt->dst;
if (skb_dst(skb)) {
int mtu = dst_mtu(ndst) - VXLAN_HEADROOM;
skb_dst_update_pmtu(skb, mtu);
}
skb_tunnel_check_pmtu(skb, ndst, VXLAN_HEADROOM);
tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
ttl = ttl ? : ip4_dst_hoplimit(&rt->dst);
@ -2303,11 +2299,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
goto out_unlock;
}
if (skb_dst(skb)) {
int mtu = dst_mtu(ndst) - VXLAN6_HEADROOM;
skb_dst_update_pmtu(skb, mtu);
}
skb_tunnel_check_pmtu(skb, ndst, VXLAN6_HEADROOM);
tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
ttl = ttl ? : ip6_dst_hoplimit(ndst);

View File

@ -485,7 +485,13 @@ static int armpmu_filter_match(struct perf_event *event)
{
struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
unsigned int cpu = smp_processor_id();
return cpumask_test_cpu(cpu, &armpmu->supported_cpus);
int ret;
ret = cpumask_test_cpu(cpu, &armpmu->supported_cpus);
if (ret && armpmu->filter_match)
return armpmu->filter_match(event);
return ret;
}
static ssize_t armpmu_cpumask_show(struct device *dev,

View File

@ -24,6 +24,8 @@
#include <linux/slab.h>
#include <linux/timekeeping.h>
#include <linux/nospec.h>
#include "ptp_private.h"
static int ptp_disable_pinfunc(struct ptp_clock_info *ops,
@ -248,6 +250,7 @@ long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg)
err = -EINVAL;
break;
}
pin_index = array_index_nospec(pin_index, ops->n_pins);
if (mutex_lock_interruptible(&ptp->pincfg_mux))
return -ERESTARTSYS;
pd = ops->pin_config[pin_index];
@ -266,6 +269,7 @@ long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg)
err = -EINVAL;
break;
}
pin_index = array_index_nospec(pin_index, ops->n_pins);
if (mutex_lock_interruptible(&ptp->pincfg_mux))
return -ERESTARTSYS;
err = ptp_set_pinfunc(ptp, pin_index, pd.func, pd.chan);
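Both ioctl paths above bound-check a user-supplied pin index and then clamp it with array_index_nospec(), so a mispredicted branch cannot speculatively index past ops->pin_config (Spectre v1). A minimal sketch of the pattern; get_pin() is an illustrative name, not a function from the patch:

    #include <linux/nospec.h>
    #include <linux/ptp_clock_kernel.h>

    static int get_pin(struct ptp_clock_info *ops, unsigned int pin_index,
                       struct ptp_pin_desc *out)
    {
            if (pin_index >= ops->n_pins)
                    return -EINVAL;
            /* Clamp the already-validated index for speculative execution too. */
            pin_index = array_index_nospec(pin_index, ops->n_pins);
            *out = ops->pin_config[pin_index];
            return 0;
    }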

View File

@ -690,8 +690,6 @@ static void afs_process_async_call(struct work_struct *work)
}
if (call->state == AFS_CALL_COMPLETE) {
call->reply[0] = NULL;
/* We have two refs to release - one from the alloc and one
* queued with the work item - and we can't just deallocate the
* call because the work item may be queued again.

View File

@ -199,11 +199,9 @@ static struct afs_server *afs_install_server(struct afs_net *net,
write_sequnlock(&net->fs_addr_lock);
ret = 0;
goto out;
exists:
afs_get_server(server);
out:
write_sequnlock(&net->fs_lock);
return server;
}

View File

@ -343,7 +343,7 @@ try_again:
trap = lock_rename(cache->graveyard, dir);
/* do some checks before getting the grave dentry */
if (rep->d_parent != dir) {
if (rep->d_parent != dir || IS_DEADDIR(d_inode(rep))) {
/* the entry was probably culled when we dropped the parent dir
* lock */
unlock_rename(cache->graveyard, dir);

View File

@ -666,6 +666,8 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
while (index < end && pagevec_lookup_entries(&pvec, mapping, index,
min(end - index, (pgoff_t)PAGEVEC_SIZE),
indices)) {
pgoff_t nr_pages = 1;
for (i = 0; i < pagevec_count(&pvec); i++) {
struct page *pvec_ent = pvec.pages[i];
void *entry;
@ -680,8 +682,15 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
xa_lock_irq(&mapping->i_pages);
entry = get_unlocked_mapping_entry(mapping, index, NULL);
if (entry)
if (entry) {
page = dax_busy_page(entry);
/*
* Account for multi-order entries at
* the end of the pagevec.
*/
if (i + 1 >= pagevec_count(&pvec))
nr_pages = 1UL << dax_radix_order(entry);
}
put_unlocked_mapping_entry(mapping, index, entry);
xa_unlock_irq(&mapping->i_pages);
if (page)
@ -696,7 +705,7 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
*/
pagevec_remove_exceptionals(&pvec);
pagevec_release(&pvec);
index++;
index += nr_pages;
if (page)
break;
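The dax_layout_busy_page() hunk above accounts for multi-order entries: when the last entry found in a batch is a huge-page entry, the next lookup must start past all the indices it covers, not at index + 1. A stand-alone model of the stride arithmetic:

    #include <stdio.h>

    int main(void)
    {
            unsigned long index = 0, order = 9;           /* 2 MiB PMD entry: order 9 */
            unsigned long nr_pages = 1UL << order;        /* as in the hunk above */

            printf("next index: %lu\n", index + nr_pages); /* 512, not 1 */
            return 0;
    }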

View File

@ -682,6 +682,7 @@ int fat_count_free_clusters(struct super_block *sb)
if (ops->ent_get(&fatent) == FAT_ENT_FREE)
free++;
} while (fat_ent_next(sbi, &fatent));
cond_resched();
}
sbi->free_clusters = free;
sbi->free_clus_valid = 1;
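The one-line FAT change above drops a cond_resched() into a scan that touches every FAT entry, which on a large volume can otherwise monopolize the CPU long enough to trip the soft-lockup watchdog. The shape of the pattern, with a hypothetical scan_one_block() helper:

    #include <linux/sched.h>

    static unsigned long scan_one_block(unsigned long i);   /* hypothetical helper */

    static unsigned long count_free(unsigned long nr_blocks)
    {
            unsigned long i, free = 0;

            for (i = 0; i < nr_blocks; i++) {
                    free += scan_one_block(i);
                    cond_resched();   /* may sleep: only legal in process context */
            }
            return free;
    }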

View File

@ -70,20 +70,7 @@ void fscache_free_cookie(struct fscache_cookie *cookie)
}
/*
* initialise an cookie jar slab element prior to any use
*/
void fscache_cookie_init_once(void *_cookie)
{
struct fscache_cookie *cookie = _cookie;
memset(cookie, 0, sizeof(*cookie));
spin_lock_init(&cookie->lock);
spin_lock_init(&cookie->stores_lock);
INIT_HLIST_HEAD(&cookie->backing_objects);
}
/*
* Set the index key in a cookie. The cookie struct has space for a 12-byte
* Set the index key in a cookie. The cookie struct has space for a 16-byte
* key plus length and hash, but if that's not big enough, it's instead a
* pointer to a buffer containing 3 bytes of hash, 1 byte of length and then
* the key data.
@ -93,20 +80,18 @@ static int fscache_set_key(struct fscache_cookie *cookie,
{
unsigned long long h;
u32 *buf;
int bufs;
int i;
cookie->key_len = index_key_len;
bufs = DIV_ROUND_UP(index_key_len, sizeof(*buf));
if (index_key_len > sizeof(cookie->inline_key)) {
buf = kzalloc(index_key_len, GFP_KERNEL);
buf = kcalloc(bufs, sizeof(*buf), GFP_KERNEL);
if (!buf)
return -ENOMEM;
cookie->key = buf;
} else {
buf = (u32 *)cookie->inline_key;
buf[0] = 0;
buf[1] = 0;
buf[2] = 0;
}
memcpy(buf, index_key, index_key_len);
@ -116,7 +101,8 @@ static int fscache_set_key(struct fscache_cookie *cookie,
*/
h = (unsigned long)cookie->parent;
h += index_key_len + cookie->type;
for (i = 0; i < (index_key_len + sizeof(u32) - 1) / sizeof(u32); i++)
for (i = 0; i < bufs; i++)
h += buf[i];
cookie->key_hash = h ^ (h >> 32);
@ -161,7 +147,7 @@ struct fscache_cookie *fscache_alloc_cookie(
struct fscache_cookie *cookie;
/* allocate and initialise a cookie */
cookie = kmem_cache_alloc(fscache_cookie_jar, GFP_KERNEL);
cookie = kmem_cache_zalloc(fscache_cookie_jar, GFP_KERNEL);
if (!cookie)
return NULL;
@ -192,6 +178,9 @@ struct fscache_cookie *fscache_alloc_cookie(
cookie->netfs_data = netfs_data;
cookie->flags = (1 << FSCACHE_COOKIE_NO_DATA_YET);
cookie->type = def->type;
spin_lock_init(&cookie->lock);
spin_lock_init(&cookie->stores_lock);
INIT_HLIST_HEAD(&cookie->backing_objects);
/* radix tree insertion won't use the preallocation pool unless it's
* told it may not wait */
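The fscache_set_key() rework above sizes and zero-fills the key buffer in whole 32-bit words (kcalloc of DIV_ROUND_UP(len, 4) words), so the hash loop never reads uninitialized tail bytes: a 10-byte key becomes 3 zeroed words, memcpy fills 10 bytes, and the 2 pad bytes are deterministically zero. A stand-alone model of the arithmetic (the hash inputs here are simplified to the key and its length):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    static uint32_t hash_key(const void *key, size_t len)
    {
            size_t i, bufs = DIV_ROUND_UP(len, sizeof(uint32_t));
            uint32_t *buf = calloc(bufs, sizeof(uint32_t));  /* zeroed pad bytes */
            uint64_t h = len;

            if (!buf)
                    return 0;
            memcpy(buf, key, len);
            for (i = 0; i < bufs; i++)   /* whole words, incl. the padded tail */
                    h += buf[i];
            free(buf);
            return (uint32_t)(h ^ (h >> 32));
    }

    int main(void)
    {
            printf("%08x\n", hash_key("ten bytes!", 10));   /* 10 bytes -> 3 words */
            return 0;
    }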

View File

@ -51,7 +51,6 @@ extern struct fscache_cache *fscache_select_cache_for_object(
extern struct kmem_cache *fscache_cookie_jar;
extern void fscache_free_cookie(struct fscache_cookie *);
extern void fscache_cookie_init_once(void *);
extern struct fscache_cookie *fscache_alloc_cookie(struct fscache_cookie *,
const struct fscache_cookie_def *,
const void *, size_t,

View File

@ -143,9 +143,7 @@ static int __init fscache_init(void)
fscache_cookie_jar = kmem_cache_create("fscache_cookie_jar",
sizeof(struct fscache_cookie),
0,
0,
fscache_cookie_init_once);
0, 0, NULL);
if (!fscache_cookie_jar) {
pr_notice("Failed to allocate a cookie jar\n");
ret = -ENOMEM;

View File

@ -975,10 +975,6 @@ static void gfs2_iomap_journaled_page_done(struct inode *inode, loff_t pos,
{
struct gfs2_inode *ip = GFS2_I(inode);
if (!page_has_buffers(page)) {
create_empty_buffers(page, inode->i_sb->s_blocksize,
(1 << BH_Dirty)|(1 << BH_Uptodate));
}
gfs2_page_add_databufs(ip, page, offset_in_page(pos), copied);
}
@ -1061,7 +1057,7 @@ static int gfs2_iomap_begin_write(struct inode *inode, loff_t pos,
}
}
release_metapath(&mp);
if (gfs2_is_jdata(ip))
if (!gfs2_is_stuffed(ip) && gfs2_is_jdata(ip))
iomap->page_done = gfs2_iomap_journaled_page_done;
return 0;

View File

@ -96,7 +96,9 @@ struct ocfs2_unblock_ctl {
};
/* Lockdep class keys */
#ifdef CONFIG_DEBUG_LOCK_ALLOC
static struct lock_class_key lockdep_keys[OCFS2_NUM_LOCK_TYPES];
#endif
static int ocfs2_check_meta_downconvert(struct ocfs2_lock_res *lockres,
int new_level);

View File

@ -2337,8 +2337,8 @@ late_initcall(ubifs_init);
static void __exit ubifs_exit(void)
{
WARN_ON(list_empty(&ubifs_infos));
WARN_ON(atomic_long_read(&ubifs_clean_zn_cnt) == 0);
WARN_ON(!list_empty(&ubifs_infos));
WARN_ON(atomic_long_read(&ubifs_clean_zn_cnt) != 0);
dbg_debugfs_exit();
ubifs_compressors_exit();

View File

@ -214,9 +214,9 @@ struct detailed_timing {
#define DRM_EDID_HDMI_DC_Y444 (1 << 3)
/* YCBCR 420 deep color modes */
#define DRM_EDID_YCBCR420_DC_48 (1 << 6)
#define DRM_EDID_YCBCR420_DC_36 (1 << 5)
#define DRM_EDID_YCBCR420_DC_30 (1 << 4)
#define DRM_EDID_YCBCR420_DC_48 (1 << 2)
#define DRM_EDID_YCBCR420_DC_36 (1 << 1)
#define DRM_EDID_YCBCR420_DC_30 (1 << 0)
#define DRM_EDID_YCBCR420_DC_MASK (DRM_EDID_YCBCR420_DC_48 | \
DRM_EDID_YCBCR420_DC_36 | \
DRM_EDID_YCBCR420_DC_30)

View File

@ -43,7 +43,7 @@ extern int mincore_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
unsigned char *vec);
extern bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
unsigned long new_addr, unsigned long old_end,
pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush);
pmd_t *old_pmd, pmd_t *new_pmd);
extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
unsigned long addr, pgprot_t newprot,
int prot_numa);

View File

@ -1027,6 +1027,14 @@ static inline void *mlx5_frag_buf_get_wqe(struct mlx5_frag_buf_ctrl *fbc,
return fbc->frags[frag].buf + ((fbc->frag_sz_m1 & ix) << fbc->log_stride);
}
static inline u32
mlx5_frag_buf_get_idx_last_contig_stride(struct mlx5_frag_buf_ctrl *fbc, u32 ix)
{
u32 last_frag_stride_idx = (ix + fbc->strides_offset) | fbc->frag_sz_m1;
return min_t(u32, last_frag_stride_idx - fbc->strides_offset, fbc->sz_m1);
}
int mlx5_cmd_init(struct mlx5_core_dev *dev);
void mlx5_cmd_cleanup(struct mlx5_core_dev *dev);
void mlx5_cmd_use_events(struct mlx5_core_dev *dev);

View File

@ -20,6 +20,7 @@
#include <linux/export.h>
#include <linux/rbtree_latch.h>
#include <linux/error-injection.h>
#include <linux/tracepoint-defs.h>
#include <linux/percpu.h>
#include <asm/module.h>
@ -430,7 +431,7 @@ struct module {
#ifdef CONFIG_TRACEPOINTS
unsigned int num_tracepoints;
struct tracepoint * const *tracepoints_ptrs;
tracepoint_ptr_t *tracepoints_ptrs;
#endif
#ifdef HAVE_JUMP_LABEL
struct jump_entry *jump_entries;

View File

@ -99,6 +99,7 @@ struct arm_pmu {
void (*stop)(struct arm_pmu *);
void (*reset)(void *);
int (*map_event)(struct perf_event *event);
int (*filter_match)(struct perf_event *event);
int num_events;
bool secure_access; /* 32-bit ARM only */
#define ARMV8_PMUV3_MAX_COMMON_EVENTS 0x40

View File

@ -35,6 +35,12 @@ struct tracepoint {
struct tracepoint_func __rcu *funcs;
};
#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
typedef const int tracepoint_ptr_t;
#else
typedef struct tracepoint * const tracepoint_ptr_t;
#endif
struct bpf_raw_event_map {
struct tracepoint *tp;
void *bpf_func;

View File

@ -99,6 +99,29 @@ extern void syscall_unregfunc(void);
#define TRACE_DEFINE_ENUM(x)
#define TRACE_DEFINE_SIZEOF(x)
#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
{
return offset_to_ptr(p);
}
#define __TRACEPOINT_ENTRY(name) \
asm(" .section \"__tracepoints_ptrs\", \"a\" \n" \
" .balign 4 \n" \
" .long __tracepoint_" #name " - . \n" \
" .previous \n")
#else
static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
{
return *p;
}
#define __TRACEPOINT_ENTRY(name) \
static tracepoint_ptr_t __tracepoint_ptr_##name __used \
__attribute__((section("__tracepoints_ptrs"))) = \
&__tracepoint_##name
#endif
#endif /* _LINUX_TRACEPOINT_H */
/*
@ -253,19 +276,6 @@ extern void syscall_unregfunc(void);
return static_key_false(&__tracepoint_##name.key); \
}
#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
#define __TRACEPOINT_ENTRY(name) \
asm(" .section \"__tracepoints_ptrs\", \"a\" \n" \
" .balign 4 \n" \
" .long __tracepoint_" #name " - . \n" \
" .previous \n")
#else
#define __TRACEPOINT_ENTRY(name) \
static struct tracepoint * const __tracepoint_ptr_##name __used \
__attribute__((section("__tracepoints_ptrs"))) = \
&__tracepoint_##name
#endif
/*
* We have no guarantee that gcc and the linker won't up-align the tracepoint
* structures, so we create an array of pointers that will be used for iteration

View File

@ -527,4 +527,14 @@ static inline void skb_dst_update_pmtu(struct sk_buff *skb, u32 mtu)
dst->ops->update_pmtu(dst, NULL, skb, mtu);
}
static inline void skb_tunnel_check_pmtu(struct sk_buff *skb,
struct dst_entry *encap_dst,
int headroom)
{
u32 encap_mtu = dst_mtu(encap_dst);
if (skb->len > encap_mtu - headroom)
skb_dst_update_pmtu(skb, encap_mtu - headroom);
}
#endif /* _NET_DST_H */
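This new helper is what the geneve and vxlan hunks earlier in this diff collapse their open-coded blocks into. Note the behavioral tightening visible above: the route PMTU is updated only when skb->len actually exceeds the encapsulated MTU, whereas the removed code updated it whenever a dst was attached. Typical caller, as in the geneve hunk:

    skb_tunnel_check_pmtu(skb, &rt->dst, GENEVE_IPV4_HLEN + info->options_len);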

View File

@ -159,6 +159,10 @@ struct fib6_info {
struct rt6_info * __percpu *rt6i_pcpu;
struct rt6_exception_bucket __rcu *rt6i_exception_bucket;
#ifdef CONFIG_IPV6_ROUTER_PREF
unsigned long last_probe;
#endif
u32 fib6_metric;
u8 fib6_protocol;
u8 fib6_type;

View File

@ -347,7 +347,7 @@ static inline __u16 sctp_data_size(struct sctp_chunk *chunk)
__u16 size;
size = ntohs(chunk->chunk_hdr->length);
size -= sctp_datahdr_len(&chunk->asoc->stream);
size -= sctp_datachk_len(&chunk->asoc->stream);
return size;
}

View File

@ -876,6 +876,8 @@ struct sctp_transport {
unsigned long sackdelay;
__u32 sackfreq;
atomic_t mtu_info;
/* When was the last time that we heard from this transport? We use
* this to pick new active and retran paths.
*/

View File

@ -301,6 +301,7 @@ enum sctp_sinfo_flags {
SCTP_SACK_IMMEDIATELY = (1 << 3), /* SACK should be sent without delay. */
/* 2 bits here have been used by SCTP_PR_SCTP_MASK */
SCTP_SENDALL = (1 << 6),
SCTP_PR_SCTP_ALL = (1 << 7),
SCTP_NOTIFICATION = MSG_NOTIFICATION, /* Next message is not user msg but notification. */
SCTP_EOF = MSG_FIN, /* Initiate graceful shutdown process. */
};

View File

@ -192,11 +192,8 @@ static int xsk_map_update_elem(struct bpf_map *map, void *key, void *value,
sock_hold(sock->sk);
old_xs = xchg(&m->xsk_map[i], xs);
if (old_xs) {
/* Make sure we've flushed everything. */
synchronize_net();
if (old_xs)
sock_put((struct sock *)old_xs);
}
sockfd_put(sock);
return 0;
@ -212,11 +209,8 @@ static int xsk_map_delete_elem(struct bpf_map *map, void *key)
return -EINVAL;
old_xs = xchg(&m->xsk_map[k], NULL);
if (old_xs) {
/* Make sure we've flushed everything. */
synchronize_net();
if (old_xs)
sock_put((struct sock *)old_xs);
}
return 0;
}

View File

@ -5,12 +5,12 @@
* Copyright (C) 2018 Joel Fernandes (Google) <joel@joelfernandes.org>
*/
#include <linux/trace_clock.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/kernel.h>
#include <linux/kthread.h>
#include <linux/ktime.h>
#include <linux/module.h>
#include <linux/printk.h>
#include <linux/string.h>
@ -25,13 +25,13 @@ MODULE_PARM_DESC(test_mode, "Mode of the test such as preempt or irq (default ir
static void busy_wait(ulong time)
{
ktime_t start, end;
start = ktime_get();
u64 start, end;
start = trace_clock_local();
do {
end = ktime_get();
end = trace_clock_local();
if (kthread_should_stop())
break;
} while (ktime_to_ns(ktime_sub(end, start)) < (time * 1000));
} while ((end - start) < (time * 1000));
}
static int preemptirq_delay_run(void *data)

View File

@ -28,8 +28,8 @@
#include <linux/sched/task.h>
#include <linux/static_key.h>
extern struct tracepoint * const __start___tracepoints_ptrs[];
extern struct tracepoint * const __stop___tracepoints_ptrs[];
extern tracepoint_ptr_t __start___tracepoints_ptrs[];
extern tracepoint_ptr_t __stop___tracepoints_ptrs[];
DEFINE_SRCU(tracepoint_srcu);
EXPORT_SYMBOL_GPL(tracepoint_srcu);
@ -371,25 +371,17 @@ int tracepoint_probe_unregister(struct tracepoint *tp, void *probe, void *data)
}
EXPORT_SYMBOL_GPL(tracepoint_probe_unregister);
static void for_each_tracepoint_range(struct tracepoint * const *begin,
struct tracepoint * const *end,
static void for_each_tracepoint_range(
tracepoint_ptr_t *begin, tracepoint_ptr_t *end,
void (*fct)(struct tracepoint *tp, void *priv),
void *priv)
{
tracepoint_ptr_t *iter;
if (!begin)
return;
if (IS_ENABLED(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS)) {
const int *iter;
for (iter = (const int *)begin; iter < (const int *)end; iter++)
fct(offset_to_ptr(iter), priv);
} else {
struct tracepoint * const *iter;
for (iter = begin; iter < end; iter++)
fct(*iter, priv);
}
for (iter = begin; iter < end; iter++)
fct(tracepoint_ptr_deref(iter), priv);
}
#ifdef CONFIG_MODULES
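The tracepoint hunks above switch __tracepoints_ptrs entries from absolute pointers to 32-bit self-relative offsets under CONFIG_HAVE_ARCH_PREL32_RELOCATIONS, with tracepoint_ptr_deref()/offset_to_ptr() recovering the pointer at iteration time; this halves the section on 64-bit and avoids load-time relocations. A stand-alone model of the arithmetic — the runtime initializer below stands in for the ".long __tracepoint_name - ." the asm entry emits, which C cannot express statically:

    #include <stdint.h>
    #include <stdio.h>

    static inline void *offset_to_ptr(const int32_t *off)
    {
            /* Same arithmetic as the kernel helper: entry address + stored delta. */
            return (void *)((uintptr_t)off + *off);
    }

    static int target = 42;      /* stands in for a struct tracepoint */
    static int32_t rel_entry;    /* stands in for a __tracepoints_ptrs slot */

    int main(void)
    {
            rel_entry = (int32_t)((uintptr_t)&target - (uintptr_t)&rel_entry);
            printf("%d\n", *(int *)offset_to_ptr(&rel_entry));   /* prints 42 */
            return 0;
    }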

View File

@ -150,10 +150,10 @@ static void ida_check_conv(struct ida *ida)
IDA_BUG_ON(ida, !ida_is_empty(ida));
}
static DEFINE_IDA(ida);
static int ida_checks(void)
{
DEFINE_IDA(ida);
IDA_BUG_ON(&ida, !ida_is_empty(&ida));
ida_check_alloc(&ida);
ida_check_destroy(&ida);

View File

@ -1780,7 +1780,7 @@ static pmd_t move_soft_dirty_pmd(pmd_t pmd)
bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
unsigned long new_addr, unsigned long old_end,
pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
pmd_t *old_pmd, pmd_t *new_pmd)
{
spinlock_t *old_ptl, *new_ptl;
pmd_t pmd;
@ -1811,7 +1811,7 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
if (new_ptl != old_ptl)
spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
pmd = pmdp_huge_get_and_clear(mm, old_addr, old_pmd);
if (pmd_present(pmd) && pmd_dirty(pmd))
if (pmd_present(pmd))
force_flush = true;
VM_BUG_ON(!pmd_none(*new_pmd));
@ -1822,12 +1822,10 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
}
pmd = move_soft_dirty_pmd(pmd);
set_pmd_at(mm, new_addr, new_pmd, pmd);
if (new_ptl != old_ptl)
spin_unlock(new_ptl);
if (force_flush)
flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
else
*need_flush = true;
if (new_ptl != old_ptl)
spin_unlock(new_ptl);
spin_unlock(old_ptl);
return true;
}
@ -2885,9 +2883,6 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
if (!(pvmw->pmd && !pvmw->pte))
return;
mmu_notifier_invalidate_range_start(mm, address,
address + HPAGE_PMD_SIZE);
flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
pmdval = *pvmw->pmd;
pmdp_invalidate(vma, address, pvmw->pmd);
@ -2900,9 +2895,6 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
set_pmd_at(mm, address, pvmw->pmd, pmdswp);
page_remove_rmap(page, true);
put_page(page);
mmu_notifier_invalidate_range_end(mm, address,
address + HPAGE_PMD_SIZE);
}
void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)

View File

@ -1410,7 +1410,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
if (flags & MAP_FIXED_NOREPLACE) {
struct vm_area_struct *vma = find_vma(mm, addr);
if (vma && vma->vm_start <= addr)
if (vma && vma->vm_start < addr + len)
return -EEXIST;
}
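The MAP_FIXED_NOREPLACE fix above tightens the collision test: find_vma(mm, addr) returns the first vma with vm_end > addr, so the request [addr, addr + len) collides exactly when that vma starts before the request ends. The old "vm_start <= addr" missed a vma that begins inside the requested range:

    #include <linux/types.h>

    /* Worked example:
     *   request [0x1000, 0x3000), existing vma [0x2000, 0x4000)
     *   old test: vm_start <= addr      -> 0x2000 <= 0x1000 -> false (missed)
     *   new test: vm_start < addr + len -> 0x2000 <  0x3000 -> true  (-EEXIST)
     */
    static bool fixed_noreplace_collides(unsigned long vm_start,
                                         unsigned long addr, unsigned long len)
    {
            return vm_start < addr + len;
    }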

View File

@ -115,7 +115,7 @@ static pte_t move_soft_dirty_pte(pte_t pte)
static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
unsigned long old_addr, unsigned long old_end,
struct vm_area_struct *new_vma, pmd_t *new_pmd,
unsigned long new_addr, bool need_rmap_locks, bool *need_flush)
unsigned long new_addr, bool need_rmap_locks)
{
struct mm_struct *mm = vma->vm_mm;
pte_t *old_pte, *new_pte, pte;
@ -163,15 +163,17 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
pte = ptep_get_and_clear(mm, old_addr, old_pte);
/*
* If we are remapping a dirty PTE, make sure
* If we are remapping a valid PTE, make sure
* to flush TLB before we drop the PTL for the
* old PTE or we may race with page_mkclean().
* PTE.
*
* This check has to be done after we removed the
* old PTE from page tables or another thread may
* dirty it after the check and before the removal.
* NOTE! Both old and new PTL matter: the old one
* for racing with page_mkclean(), the new one to
* make sure the physical page stays valid until
* the TLB entry for the old mapping has been
* flushed.
*/
if (pte_present(pte) && pte_dirty(pte))
if (pte_present(pte))
force_flush = true;
pte = move_pte(pte, new_vma->vm_page_prot, old_addr, new_addr);
pte = move_soft_dirty_pte(pte);
@ -179,13 +181,11 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
}
arch_leave_lazy_mmu_mode();
if (force_flush)
flush_tlb_range(vma, old_end - len, old_end);
if (new_ptl != old_ptl)
spin_unlock(new_ptl);
pte_unmap(new_pte - 1);
if (force_flush)
flush_tlb_range(vma, old_end - len, old_end);
else
*need_flush = true;
pte_unmap_unlock(old_pte - 1, old_ptl);
if (need_rmap_locks)
drop_rmap_locks(vma);
@ -198,7 +198,6 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
{
unsigned long extent, next, old_end;
pmd_t *old_pmd, *new_pmd;
bool need_flush = false;
unsigned long mmun_start; /* For mmu_notifiers */
unsigned long mmun_end; /* For mmu_notifiers */
@ -229,8 +228,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
if (need_rmap_locks)
take_rmap_locks(vma);
moved = move_huge_pmd(vma, old_addr, new_addr,
old_end, old_pmd, new_pmd,
&need_flush);
old_end, old_pmd, new_pmd);
if (need_rmap_locks)
drop_rmap_locks(vma);
if (moved)
@ -246,10 +244,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
if (extent > next - new_addr)
extent = next - new_addr;
move_ptes(vma, old_pmd, old_addr, old_addr + extent, new_vma,
new_pmd, new_addr, need_rmap_locks, &need_flush);
new_pmd, new_addr, need_rmap_locks);
}
if (need_flush)
flush_tlb_range(vma, old_end-len, old_addr);
mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start, mmun_end);
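The mm hunks above replace the "dirty PTE" test with "any present PTE" and pull the TLB flush before the page-table locks are dropped; the NOTE in the new comment is the whole argument. A condensed ordering sketch, not drop-in code (per-entry flush here for brevity; the real code batches one flush per PTE page):

    #include <linux/mm.h>
    #include <asm/tlbflush.h>

    /* Caller holds the old PTL (and the nested new PTL when they differ). */
    static void move_one_pte_locked(struct vm_area_struct *vma,
                                    struct mm_struct *mm,
                                    unsigned long old_addr, pte_t *old_pte,
                                    unsigned long new_addr, pte_t *new_pte)
    {
            pte_t pte = ptep_get_and_clear(mm, old_addr, old_pte);

            set_pte_at(mm, new_addr, new_pte, pte);
            if (pte_present(pte))   /* any valid PTE, not just a dirty one */
                    flush_tlb_range(vma, old_addr, old_addr + PAGE_SIZE);
            /* Only now may the caller unlock new_ptl, then old_ptl: the old
             * PTL guards the page_mkclean() race, the new PTL keeps the
             * physical page from being reused while a stale TLB entry may
             * still map it.
             */
    }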

View File

@ -23,9 +23,11 @@ static void shutdown_umh(struct umh_info *info)
if (!info->pid)
return;
tsk = pid_task(find_vpid(info->pid), PIDTYPE_PID);
if (tsk)
tsk = get_pid_task(find_vpid(info->pid), PIDTYPE_PID);
if (tsk) {
force_sig(SIGKILL, tsk);
put_task_struct(tsk);
}
fput(info->pipe_to_umh);
fput(info->pipe_from_umh);
info->pid = 0;
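The shutdown_umh() hunk above swaps pid_task() for get_pid_task(), which takes a reference on the task so it cannot be freed between lookup and use; the caller balances it with put_task_struct(). A sketch of the pattern — the rcu_read_lock() around find_vpid() is this sketch's assumption, not part of the hunk:

    #include <linux/pid.h>
    #include <linux/rcupdate.h>
    #include <linux/sched/signal.h>
    #include <linux/sched/task.h>

    static void kill_by_pid(pid_t nr)
    {
            struct task_struct *tsk;

            rcu_read_lock();
            tsk = get_pid_task(find_vpid(nr), PIDTYPE_PID);  /* holds a ref */
            rcu_read_unlock();
            if (tsk) {
                    force_sig(SIGKILL, tsk);  /* two-argument form, as in this tree */
                    put_task_struct(tsk);     /* drop the reference we took */
            }
    }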

View File

@ -928,6 +928,9 @@ static noinline_for_stack int ethtool_get_rxnfc(struct net_device *dev,
return -EINVAL;
}
if (info.cmd != cmd)
return -EINVAL;
if (info.cmd == ETHTOOL_GRXCLSRLALL) {
if (info.rule_cnt > 0) {
if (info.rule_cnt <= KMALLOC_MAX_SIZE / sizeof(u32))
@ -2392,13 +2395,17 @@ roll_back:
return ret;
}
static int ethtool_set_per_queue(struct net_device *dev, void __user *useraddr)
static int ethtool_set_per_queue(struct net_device *dev,
void __user *useraddr, u32 sub_cmd)
{
struct ethtool_per_queue_op per_queue_opt;
if (copy_from_user(&per_queue_opt, useraddr, sizeof(per_queue_opt)))
return -EFAULT;
if (per_queue_opt.sub_command != sub_cmd)
return -EINVAL;
switch (per_queue_opt.sub_command) {
case ETHTOOL_GCOALESCE:
return ethtool_get_per_queue_coalesce(dev, useraddr, &per_queue_opt);
@ -2769,7 +2776,7 @@ int dev_ethtool(struct net *net, struct ifreq *ifr)
rc = ethtool_get_phy_stats(dev, useraddr);
break;
case ETHTOOL_PERQUEUE:
rc = ethtool_set_per_queue(dev, useraddr);
rc = ethtool_set_per_queue(dev, useraddr, sub_cmd);
break;
case ETHTOOL_GLINKSETTINGS:
rc = ethtool_get_link_ksettings(dev, useraddr);
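Both ethtool hunks above close the same class of hole: a command word fetched from user memory a second time must be re-checked against the value already validated, because userspace can rewrite it between the two copies (a double-fetch). The added check, as it appears in the rewritten ethtool_set_per_queue():

    if (copy_from_user(&per_queue_opt, useraddr, sizeof(per_queue_opt)))
            return -EFAULT;
    if (per_queue_opt.sub_command != sub_cmd)  /* re-validate the second fetch */
            return -EINVAL;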

View File

@ -312,7 +312,6 @@ void netpoll_send_skb_on_dev(struct netpoll *np, struct sk_buff *skb,
/* It is up to the caller to keep npinfo alive. */
struct netpoll_info *npinfo;
rcu_read_lock_bh();
lockdep_assert_irqs_disabled();
npinfo = rcu_dereference_bh(np->dev->npinfo);
@ -357,7 +356,6 @@ void netpoll_send_skb_on_dev(struct netpoll *np, struct sk_buff *skb,
skb_queue_tail(&npinfo->txq, skb);
schedule_delayed_work(&npinfo->tx_work,0);
}
rcu_read_unlock_bh();
}
EXPORT_SYMBOL(netpoll_send_skb_on_dev);

View File

@ -315,8 +315,6 @@ int mr_table_dump(struct mr_table *mrt, struct sk_buff *skb,
next_entry:
e++;
}
e = 0;
s_e = 0;
spin_lock_bh(lock);
list_for_each_entry(mfc, &mrt->mfc_unres_queue, list) {

View File

@ -1184,11 +1184,6 @@ route_lookup:
}
skb_dst_set(skb, dst);
if (encap_limit >= 0) {
init_tel_txopt(&opt, encap_limit);
ipv6_push_frag_opts(skb, &opt.ops, &proto);
}
if (hop_limit == 0) {
if (skb->protocol == htons(ETH_P_IP))
hop_limit = ip_hdr(skb)->ttl;
@ -1210,6 +1205,11 @@ route_lookup:
if (err)
return err;
if (encap_limit >= 0) {
init_tel_txopt(&opt, encap_limit);
ipv6_push_frag_opts(skb, &opt.ops, &proto);
}
skb_push(skb, sizeof(struct ipv6hdr));
skb_reset_network_header(skb);
ipv6h = ipv6_hdr(skb);

View File

@ -2436,17 +2436,17 @@ static int ip6_mc_leave_src(struct sock *sk, struct ipv6_mc_socklist *iml,
{
int err;
/* callers have the socket lock and rtnl lock
* so no other readers or writers of iml or its sflist
*/
write_lock_bh(&iml->sflock);
if (!iml->sflist) {
/* any-source empty exclude case */
return ip6_mc_del_src(idev, &iml->addr, iml->sfmode, 0, NULL, 0);
err = ip6_mc_del_src(idev, &iml->addr, iml->sfmode, 0, NULL, 0);
} else {
err = ip6_mc_del_src(idev, &iml->addr, iml->sfmode,
iml->sflist->sl_count, iml->sflist->sl_addr, 0);
sock_kfree_s(sk, iml->sflist, IP6_SFLSIZE(iml->sflist->sl_max));
iml->sflist = NULL;
}
err = ip6_mc_del_src(idev, &iml->addr, iml->sfmode,
iml->sflist->sl_count, iml->sflist->sl_addr, 0);
sock_kfree_s(sk, iml->sflist, IP6_SFLSIZE(iml->sflist->sl_max));
iml->sflist = NULL;
write_unlock_bh(&iml->sflock);
return err;
}

View File

@ -517,10 +517,11 @@ static void rt6_probe_deferred(struct work_struct *w)
static void rt6_probe(struct fib6_info *rt)
{
struct __rt6_probe_work *work;
struct __rt6_probe_work *work = NULL;
const struct in6_addr *nh_gw;
struct neighbour *neigh;
struct net_device *dev;
struct inet6_dev *idev;
/*
* Okay, this does not seem to be appropriate
@ -536,15 +537,12 @@ static void rt6_probe(struct fib6_info *rt)
nh_gw = &rt->fib6_nh.nh_gw;
dev = rt->fib6_nh.nh_dev;
rcu_read_lock_bh();
idev = __in6_dev_get(dev);
neigh = __ipv6_neigh_lookup_noref(dev, nh_gw);
if (neigh) {
struct inet6_dev *idev;
if (neigh->nud_state & NUD_VALID)
goto out;
idev = __in6_dev_get(dev);
work = NULL;
write_lock(&neigh->lock);
if (!(neigh->nud_state & NUD_VALID) &&
time_after(jiffies,
@ -554,11 +552,13 @@ static void rt6_probe(struct fib6_info *rt)
__neigh_set_probe_once(neigh);
}
write_unlock(&neigh->lock);
} else {
} else if (time_after(jiffies, rt->last_probe +
idev->cnf.rtr_probe_interval)) {
work = kmalloc(sizeof(*work), GFP_ATOMIC);
}
if (work) {
rt->last_probe = jiffies;
INIT_WORK(&work->work, rt6_probe_deferred);
work->target = *nh_gw;
dev_hold(dev);

View File

@ -766,11 +766,9 @@ static int udp6_unicast_rcv_skb(struct sock *sk, struct sk_buff *skb,
ret = udpv6_queue_rcv_skb(sk, skb);
/* a return value > 0 means to resubmit the input, but
* it wants the return to be -protocol, or 0
*/
/* a return value > 0 means to resubmit the input */
if (ret > 0)
return -ret;
return ret;
return 0;
}

View File

@ -146,8 +146,8 @@ _decode_session6(struct sk_buff *skb, struct flowi *fl, int reverse)
fl6->daddr = reverse ? hdr->saddr : hdr->daddr;
fl6->saddr = reverse ? hdr->daddr : hdr->saddr;
while (nh + offset + 1 < skb->data ||
pskb_may_pull(skb, nh + offset + 1 - skb->data)) {
while (nh + offset + sizeof(*exthdr) < skb->data ||
pskb_may_pull(skb, nh + offset + sizeof(*exthdr) - skb->data)) {
nh = skb_network_header(skb);
exthdr = (struct ipv6_opt_hdr *)(nh + offset);

View File

@ -734,6 +734,7 @@ void llc_sap_add_socket(struct llc_sap *sap, struct sock *sk)
llc_sk(sk)->sap = sap;
spin_lock_bh(&sap->sk_lock);
sock_set_flag(sk, SOCK_RCU_FREE);
sap->sk_count++;
sk_nulls_add_node_rcu(sk, laddr_hb);
hlist_add_head(&llc->dev_hash_node, dev_hb);

View File

@ -337,7 +337,7 @@ struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *local,
{
struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
struct rxrpc_connection *conn;
struct rxrpc_peer *peer;
struct rxrpc_peer *peer = NULL;
struct rxrpc_call *call;
_enter("");

View File

@ -139,7 +139,7 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
udp_sk(usk)->gro_complete = NULL;
udp_encap_enable();
#if IS_ENABLED(CONFIG_IPV6)
#if IS_ENABLED(CONFIG_AF_RXRPC_IPV6)
if (local->srx.transport.family == AF_INET6)
udpv6_encap_enable();
#endif

View File

@ -572,7 +572,8 @@ void rxrpc_reject_packets(struct rxrpc_local *local)
whdr.flags ^= RXRPC_CLIENT_INITIATED;
whdr.flags &= RXRPC_CLIENT_INITIATED;
ret = kernel_sendmsg(local->socket, &msg, iov, 2, size);
ret = kernel_sendmsg(local->socket, &msg,
iov, ioc, size);
if (ret < 0)
trace_rxrpc_tx_fail(local->debug_id, 0, ret,
rxrpc_tx_point_reject);

View File

@ -197,6 +197,7 @@ void rxrpc_error_report(struct sock *sk)
rxrpc_store_error(peer, serr);
rcu_read_unlock();
rxrpc_free_skb(skb, rxrpc_skb_rx_freed);
rxrpc_put_peer(peer);
_leave("");
}

Some files were not shown because too many files have changed in this diff.