Merge /spare/repo/linux-2.6/

Jeff Garzik 2005-09-22 15:43:14 -04:00
commit da192bb50c
26 changed files with 260 additions and 228 deletions


@@ -1,5 +1,6 @@
Revised: 2000-Dec-05.
Again: 2002-Jul-06
Again: 2005-Sep-19
NOTE:
@@ -18,8 +19,8 @@ called USB Request Block, or URB for short.
and deliver the data and status back.
- Execution of an URB is inherently an asynchronous operation, i.e. the
usb_submit_urb(urb) call returns immediately after it has successfully queued
the requested action.
usb_submit_urb(urb) call returns immediately after it has successfully
queued the requested action.
- Transfers for one URB can be canceled with usb_unlink_urb(urb) at any time.
@@ -94,8 +95,9 @@ To free an URB, use
void usb_free_urb(struct urb *urb)
You may not free an urb that you've submitted, but which hasn't yet been
returned to you in a completion callback.
You may free an urb that you've submitted, but which hasn't yet been
returned to you in a completion callback. It will automatically be
deallocated when it is no longer in use.
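For illustration, here is a minimal sketch of the allocate/submit/free
cycle described above. It is not part of this commit; my_dev, my_buf,
MY_BUF_LEN, my_complete and my_ctx are hypothetical names:

	struct urb *urb = usb_alloc_urb(0, GFP_KERNEL);

	if (!urb)
		return -ENOMEM;
	/* bulk OUT to a (hypothetical) endpoint 2; my_ctx is handed
	 * back to my_complete() via urb->context */
	usb_fill_bulk_urb(urb, my_dev, usb_sndbulkpipe(my_dev, 2),
			my_buf, MY_BUF_LEN, my_complete, my_ctx);
	if (usb_submit_urb(urb, GFP_KERNEL) != 0)
		usb_free_urb(urb);	/* never queued, still ours to free */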
1.4. What has to be filled in?
@@ -145,30 +147,36 @@ to get seamless ISO streaming.
1.6. How to cancel an already running URB?
For an URB which you've submitted, but which hasn't been returned to
your driver by the host controller, call
There are two ways to cancel an URB you've submitted but which hasn't
been returned to your driver yet. For an asynchronous cancel, call
int usb_unlink_urb(struct urb *urb)
It removes the urb from the internal list and frees all allocated
HW descriptors. The status is changed to reflect unlinking. After
usb_unlink_urb() returns with that status code, you can free the URB
with usb_free_urb().
HW descriptors. The status is changed to reflect unlinking. Note
that the URB will not normally have finished when usb_unlink_urb()
returns; you must still wait for the completion handler to be called.
There is also an asynchronous unlink mode. To use this, set the
URB_ASYNC_UNLINK flag in urb->transfer_flags before calling
usb_unlink_urb(). When using async unlinking, the URB will not
normally be unlinked when usb_unlink_urb() returns. Instead, wait
for the completion handler to be called.
To cancel an URB synchronously, call
void usb_kill_urb(struct urb *urb)
It does everything usb_unlink_urb does, and in addition it waits
until after the URB has been returned and the completion handler
has finished. It also marks the URB as temporarily unusable, so
that if the completion handler or anyone else tries to resubmit it
they will get a -EPERM error. Thus you can be sure that when
usb_kill_urb() returns, the URB is totally idle.
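As a sketch of the difference (hypothetical driver code, not from this
commit): a path that must not block uses the asynchronous call, while a
disconnect or close path usually wants the synchronous one.

	usb_unlink_urb(urb);	/* returns at once; the completion
				 * handler will still be called */

	usb_kill_urb(urb);	/* may sleep; on return the URB is idle */
	usb_free_urb(urb);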
1.7. What about the completion handler?
The handler is of the following type:
typedef void (*usb_complete_t)(struct urb *);
typedef void (*usb_complete_t)(struct urb *, struct pt_regs *)
i.e. it gets just the URB that caused the completion call.
I.e., it gets the URB that caused the completion call, plus the
register values at the time of the corresponding interrupt (if any).
In the completion handler, you should have a look at urb->status to
detect any USB errors. Since the context parameter is included in the URB,
you can pass information to the completion handler.
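A minimal handler along those lines might look like this (sketch only;
struct my_priv and its fields are hypothetical):

	static void my_complete(struct urb *urb, struct pt_regs *regs)
	{
		struct my_priv *priv = urb->context;

		if (urb->status)
			printk(KERN_DEBUG "mydrv: urb error %d\n",
					urb->status);
		else
			priv->received = urb->actual_length;
	}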
@@ -176,17 +184,11 @@ you can pass information to the completion handler.
Note that even when an error (or unlink) is reported, data may have been
transferred. That's because USB transfers are packetized; it might take
sixteen packets to transfer your 1KByte buffer, and ten of them might
have transferred successfully before the completion is called.
have transferred successfully before the completion was called.
NOTE: ***** WARNING *****
Don't use urb->dev field in your completion handler; it's cleared
as part of giving urbs back to drivers. (Addressing an issue with
ownership of periodic URBs, which was otherwise ambiguous.) Instead,
use urb->context to hold all the data your driver needs.
NOTE: ***** WARNING *****
Also, NEVER SLEEP IN A COMPLETION HANDLER. These are normally called
NEVER SLEEP IN A COMPLETION HANDLER. These are normally called
during hardware interrupt processing. If you can, defer substantial
work to a tasklet (bottom half) to keep system latencies low. You'll
probably need to use spinlocks to protect data structures you manipulate
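A sketch of that deferral pattern (my_priv, my_bh and the lock are
hypothetical; assume priv->bh was set up earlier with
tasklet_init(&priv->bh, my_bh, (unsigned long) priv)):

	static void my_bh(unsigned long data)
	{
		struct my_priv *priv = (struct my_priv *) data;
		unsigned long flags;

		spin_lock_irqsave(&priv->lock, flags);
		/* substantial work happens here, outside hard-irq context */
		spin_unlock_irqrestore(&priv->lock, flags);
	}

	static void my_complete(struct urb *urb, struct pt_regs *regs)
	{
		struct my_priv *priv = urb->context;

		tasklet_schedule(&priv->bh);	/* never sleep here */
	}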
@@ -229,24 +231,10 @@ ISO data with some other event stream.
Interrupt transfers, like isochronous transfers, are periodic, and happen
in intervals that are powers of two (1, 2, 4 etc) units. Units are frames
for full and low speed devices, and microframes for high speed ones.
Currently, after you submit one interrupt URB, that urb is owned by the
host controller driver until you cancel it with usb_unlink_urb(). You
may unlink interrupt urbs in their completion handlers, if you need to.
After a transfer completion is called, the URB is automagically resubmitted.
THIS BEHAVIOR IS EXPECTED TO BE REMOVED!!
Interrupt transfers may only send (or receive) the "maxpacket" value for
the given interrupt endpoint; if you need more data, you will need to
copy that data out of (or into) another buffer. Similarly, you can't
queue interrupt transfers.
THESE RESTRICTIONS ARE EXPECTED TO BE REMOVED!!
Note that this automagic resubmission model does make it awkward to use
interrupt OUT transfers. The portable solution involves unlinking those
OUT urbs after the data is transferred, and perhaps submitting a final
URB for a short packet.
The usb_submit_urb() call modifies urb->interval to the implemented interval
value that is less than or equal to the requested interval value.
In Linux 2.6, unlike earlier versions, interrupt URBs are not automagically
restarted when they complete. They end when the completion handler is
called, just like other URBs. If you want an interrupt URB to be restarted,
your completion handler must resubmit it.
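A sketch of the resubmission this implies (hypothetical handler, not
part of the commit):

	static void my_int_complete(struct urb *urb, struct pt_regs *regs)
	{
		int rc;

		if (urb->status != 0)
			return;	/* unlinked or errored: do not requeue */

		/* consume urb->transfer_buffer, then requeue the URB */
		rc = usb_submit_urb(urb, GFP_ATOMIC);
		if (rc)
			printk(KERN_ERR "mydrv: resubmit failed (%d)\n", rc);
	}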


@@ -1063,8 +1063,6 @@ M: wli@holomorphy.com
S: Maintained
I2C SUBSYSTEM
P: Greg Kroah-Hartman
M: greg@kroah.com
P: Jean Delvare
M: khali@linux-fr.org
L: lm-sensors@lm-sensors.org


@@ -184,8 +184,8 @@ mpc85xx_setup_pci1(struct pci_controller *hose)
pci->powar1 = 0x80044000 |
(__ilog2(MPC85XX_PCI1_UPPER_MEM - MPC85XX_PCI1_LOWER_MEM + 1) - 1);
/* Setup outboud IO windows @ MPC85XX_PCI1_IO_BASE */
pci->potar2 = 0x00000000;
/* Setup outbound IO windows @ MPC85XX_PCI1_IO_BASE */
pci->potar2 = (MPC85XX_PCI1_LOWER_IO >> 12) & 0x000fffff;
pci->potear2 = 0x00000000;
pci->powbar2 = (MPC85XX_PCI1_IO_BASE >> 12) & 0x000fffff;
/* Enable, IO R/W */
@@ -235,8 +235,8 @@ mpc85xx_setup_pci2(struct pci_controller *hose)
pci->powar1 = 0x80044000 |
(__ilog2(MPC85XX_PCI2_UPPER_MEM - MPC85XX_PCI2_LOWER_MEM + 1) - 1);
/* Setup outboud IO windows @ MPC85XX_PCI2_IO_BASE */
pci->potar2 = 0x00000000;
/* Setup outbound IO windows @ MPC85XX_PCI2_IO_BASE */
pci->potar2 = (MPC85XX_PCI2_LOWER_IO >> 12) & 0x000fffff;
pci->potear2 = 0x00000000;
pci->powbar2 = (MPC85XX_PCI2_IO_BASE >> 12) & 0x000fffff;
/* Enable, IO R/W */


@@ -669,6 +669,7 @@ void class_device_destroy(struct class *cls, dev_t devt)
int class_device_rename(struct class_device *class_dev, char *new_name)
{
int error = 0;
char *old_class_name = NULL, *new_class_name = NULL;
class_dev = class_device_get(class_dev);
if (!class_dev)
@@ -677,12 +678,24 @@ int class_device_rename(struct class_device *class_dev, char *new_name)
pr_debug("CLASS: renaming '%s' to '%s'\n", class_dev->class_id,
new_name);
if (class_dev->dev)
old_class_name = make_class_name(class_dev);
strlcpy(class_dev->class_id, new_name, KOBJ_NAME_LEN);
error = kobject_rename(&class_dev->kobj, new_name);
if (class_dev->dev) {
new_class_name = make_class_name(class_dev);
sysfs_create_link(&class_dev->dev->kobj, &class_dev->kobj,
new_class_name);
sysfs_remove_link(&class_dev->dev->kobj, old_class_name);
}
class_device_put(class_dev);
kfree(old_class_name);
kfree(new_class_name);
return error;
}


@@ -40,6 +40,9 @@
*/
void device_bind_driver(struct device * dev)
{
if (klist_node_attached(&dev->knode_driver))
return;
pr_debug("bound device '%s' to driver '%s'\n",
dev->bus_id, dev->driver->name);
klist_add_tail(&dev->knode_driver, &dev->driver->klist_devices);


@@ -172,7 +172,7 @@ struct bulk_cs_wrap {
*/
struct ub_dev;
#define UB_MAX_REQ_SG 4
#define UB_MAX_REQ_SG 9 /* cdrecord requires 32KB and maybe a header */
#define UB_MAX_SECTORS 64
/*
@@ -387,7 +387,7 @@ struct ub_dev {
struct bulk_cs_wrap work_bcs;
struct usb_ctrlrequest work_cr;
int sg_stat[UB_MAX_REQ_SG+1];
int sg_stat[6];
struct ub_scsi_trace tr;
};
@@ -525,12 +525,13 @@ static ssize_t ub_diag_show(struct device *dev, struct device_attribute *attr,
"qlen %d qmax %d\n",
sc->cmd_queue.qlen, sc->cmd_queue.qmax);
cnt += sprintf(page + cnt,
"sg %d %d %d %d %d\n",
"sg %d %d %d %d %d .. %d\n",
sc->sg_stat[0],
sc->sg_stat[1],
sc->sg_stat[2],
sc->sg_stat[3],
sc->sg_stat[4]);
sc->sg_stat[4],
sc->sg_stat[5]);
list_for_each (p, &sc->luns) {
lun = list_entry(p, struct ub_lun, link);
@@ -835,7 +836,7 @@ static int ub_cmd_build_block(struct ub_dev *sc, struct ub_lun *lun,
return -1;
}
cmd->nsg = n_elem;
sc->sg_stat[n_elem]++;
sc->sg_stat[n_elem < 5 ? n_elem : 5]++;
/*
* build the command
@@ -891,7 +892,7 @@ static int ub_cmd_build_packet(struct ub_dev *sc, struct ub_lun *lun,
return -1;
}
cmd->nsg = n_elem;
sc->sg_stat[n_elem]++;
sc->sg_stat[n_elem < 5 ? n_elem : 5]++;
memcpy(&cmd->cdb, rq->cmd, rq->cmd_len);
cmd->cdb_len = rq->cmd_len;
@@ -1010,7 +1011,6 @@ static int ub_scsi_cmd_start(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
sc->last_pipe = sc->send_bulk_pipe;
usb_fill_bulk_urb(&sc->work_urb, sc->dev, sc->send_bulk_pipe,
bcb, US_BULK_CB_WRAP_LEN, ub_urb_complete, sc);
sc->work_urb.transfer_flags = 0;
/* Fill what we shouldn't be filling, because usb-storage did so. */
sc->work_urb.actual_length = 0;
@@ -1019,7 +1019,6 @@ static int ub_scsi_cmd_start(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
if ((rc = usb_submit_urb(&sc->work_urb, GFP_ATOMIC)) != 0) {
/* XXX Clear stalls */
printk("ub: cmd #%d start failed (%d)\n", cmd->tag, rc); /* P3 */
ub_complete(&sc->work_done);
return rc;
}
@@ -1190,11 +1189,9 @@ static void ub_scsi_urb_compl(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
return;
}
if (urb->status != 0) {
printk("ub: cmd #%d cmd status (%d)\n", cmd->tag, urb->status); /* P3 */
goto Bad_End;
}
if (urb->actual_length != US_BULK_CB_WRAP_LEN) {
printk("ub: cmd #%d xferred %d\n", cmd->tag, urb->actual_length); /* P3 */
/* XXX Must do reset here to unconfuse the device */
goto Bad_End;
}
@@ -1395,14 +1392,12 @@ static void ub_data_start(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
usb_fill_bulk_urb(&sc->work_urb, sc->dev, pipe,
page_address(sg->page) + sg->offset, sg->length,
ub_urb_complete, sc);
sc->work_urb.transfer_flags = 0;
sc->work_urb.actual_length = 0;
sc->work_urb.error_count = 0;
sc->work_urb.status = 0;
if ((rc = usb_submit_urb(&sc->work_urb, GFP_ATOMIC)) != 0) {
/* XXX Clear stalls */
printk("ub: data #%d submit failed (%d)\n", cmd->tag, rc); /* P3 */
ub_complete(&sc->work_done);
ub_state_done(sc, cmd, rc);
return;
@@ -1442,7 +1437,6 @@ static int __ub_state_stat(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
sc->last_pipe = sc->recv_bulk_pipe;
usb_fill_bulk_urb(&sc->work_urb, sc->dev, sc->recv_bulk_pipe,
&sc->work_bcs, US_BULK_CS_WRAP_LEN, ub_urb_complete, sc);
sc->work_urb.transfer_flags = 0;
sc->work_urb.actual_length = 0;
sc->work_urb.error_count = 0;
sc->work_urb.status = 0;
@@ -1563,7 +1557,6 @@ static int ub_submit_clear_stall(struct ub_dev *sc, struct ub_scsi_cmd *cmd,
usb_fill_control_urb(&sc->work_urb, sc->dev, sc->send_ctrl_pipe,
(unsigned char*) cr, NULL, 0, ub_urb_complete, sc);
sc->work_urb.transfer_flags = 0;
sc->work_urb.actual_length = 0;
sc->work_urb.error_count = 0;
sc->work_urb.status = 0;
@@ -2000,17 +1993,16 @@ static int ub_sync_getmaxlun(struct ub_dev *sc)
usb_fill_control_urb(&sc->work_urb, sc->dev, sc->recv_ctrl_pipe,
(unsigned char*) cr, p, 1, ub_probe_urb_complete, &compl);
sc->work_urb.transfer_flags = 0;
sc->work_urb.actual_length = 0;
sc->work_urb.error_count = 0;
sc->work_urb.status = 0;
if ((rc = usb_submit_urb(&sc->work_urb, GFP_KERNEL)) != 0) {
if (rc == -EPIPE) {
printk("%s: Stall at GetMaxLUN, using 1 LUN\n",
printk("%s: Stall submitting GetMaxLUN, using 1 LUN\n",
sc->name); /* P3 */
} else {
printk(KERN_WARNING
printk(KERN_NOTICE
"%s: Unable to submit GetMaxLUN (%d)\n",
sc->name, rc);
}
@@ -2028,6 +2020,18 @@ static int ub_sync_getmaxlun(struct ub_dev *sc)
del_timer_sync(&timer);
usb_kill_urb(&sc->work_urb);
if ((rc = sc->work_urb.status) < 0) {
if (rc == -EPIPE) {
printk("%s: Stall at GetMaxLUN, using 1 LUN\n",
sc->name); /* P3 */
} else {
printk(KERN_NOTICE
"%s: Error at GetMaxLUN (%d)\n",
sc->name, rc);
}
goto err_io;
}
if (sc->work_urb.actual_length != 1) {
printk("%s: GetMaxLUN returned %d bytes\n", sc->name,
sc->work_urb.actual_length); /* P3 */
@@ -2048,6 +2052,7 @@ static int ub_sync_getmaxlun(struct ub_dev *sc)
kfree(p);
return nluns;
err_io:
err_submit:
kfree(p);
err_alloc:
@@ -2080,7 +2085,6 @@ static int ub_probe_clear_stall(struct ub_dev *sc, int stalled_pipe)
usb_fill_control_urb(&sc->work_urb, sc->dev, sc->send_ctrl_pipe,
(unsigned char*) cr, NULL, 0, ub_probe_urb_complete, &compl);
sc->work_urb.transfer_flags = 0;
sc->work_urb.actual_length = 0;
sc->work_urb.error_count = 0;
sc->work_urb.status = 0;
@@ -2213,8 +2217,10 @@ static int ub_probe(struct usb_interface *intf,
* This is needed to clear toggles. It is a problem only if we do
* `rmmod ub && modprobe ub` without disconnects, but we like that.
*/
#if 0 /* iPod Mini fails if we do this (big white iPod works) */
ub_probe_clear_stall(sc, sc->recv_bulk_pipe);
ub_probe_clear_stall(sc, sc->send_bulk_pipe);
#endif
/*
* The way this is used by the startup code is a little specific.
@@ -2241,10 +2247,10 @@ static int ub_probe(struct usb_interface *intf,
for (i = 0; i < 3; i++) {
if ((rc = ub_sync_getmaxlun(sc)) < 0) {
/*
* Some devices (i.e. Iomega Zip100) need this --
* apparently the bulk pipes get STALLed when the
* GetMaxLUN request is processed.
* XXX I have a ZIP-100, verify it does this.
* This segment is taken from usb-storage. They say
* that ZIP-100 needs this, but my own ZIP-100 works
* fine without this.
* Still, it does not seem to hurt anything.
*/
if (rc == -EPIPE) {
ub_probe_clear_stall(sc, sc->recv_bulk_pipe);
@@ -2313,7 +2319,7 @@ static int ub_probe_lun(struct ub_dev *sc, int lnum)
disk->first_minor = lun->id * UB_MINORS_PER_MAJOR;
disk->fops = &ub_bd_fops;
disk->private_data = lun;
disk->driverfs_dev = &sc->intf->dev; /* XXX Many to one ok? */
disk->driverfs_dev = &sc->intf->dev;
rc = -ENOMEM;
if ((q = blk_init_queue(ub_request_fn, &sc->lock)) == NULL)
@@ -2466,9 +2472,6 @@ static int __init ub_init(void)
{
int rc;
/* P3 */ printk("ub: sizeof ub_scsi_cmd %zu ub_dev %zu ub_lun %zu\n",
sizeof(struct ub_scsi_cmd), sizeof(struct ub_dev), sizeof(struct ub_lun));
if ((rc = register_blkdev(UB_MAJOR, DRV_NAME)) != 0)
goto err_regblkdev;
devfs_mk_dir(DEVFS_NAME);


@@ -1653,7 +1653,8 @@ static int bond_enslave(struct net_device *bond_dev, struct net_device *slave_de
int old_features = bond_dev->features;
int res = 0;
if (slave_dev->do_ioctl == NULL) {
if (!bond->params.use_carrier && slave_dev->ethtool_ops == NULL &&
slave_dev->do_ioctl == NULL) {
printk(KERN_WARNING DRV_NAME
": Warning : no link monitoring support for %s\n",
slave_dev->name);


@@ -100,11 +100,11 @@ VERSION 2.2LK <2005/01/25>
#ifdef CONFIG_R8169_NAPI
#define rtl8169_rx_skb netif_receive_skb
#define rtl8169_rx_hwaccel_skb vlan_hwaccel_rx
#define rtl8169_rx_hwaccel_skb vlan_hwaccel_receive_skb
#define rtl8169_rx_quota(count, quota) min(count, quota)
#else
#define rtl8169_rx_skb netif_rx
#define rtl8169_rx_hwaccel_skb vlan_hwaccel_receive_skb
#define rtl8169_rx_hwaccel_skb vlan_hwaccel_rx
#define rtl8169_rx_quota(count, quota) count
#endif


@@ -42,7 +42,7 @@
#include "skge.h"
#define DRV_NAME "skge"
#define DRV_VERSION "1.0"
#define DRV_VERSION "1.1"
#define PFX DRV_NAME " "
#define DEFAULT_TX_RING_SIZE 128
@@ -105,41 +105,28 @@ static const u32 rxirqmask[] = { IS_R1_F, IS_R2_F };
static const u32 txirqmask[] = { IS_XA1_F, IS_XA2_F };
static const u32 portirqmask[] = { IS_PORT_1, IS_PORT_2 };
/* Don't need to look at whole 16K.
* last interesting register is descriptor poll timer.
*/
#define SKGE_REGS_LEN (29*128)
static int skge_get_regs_len(struct net_device *dev)
{
return SKGE_REGS_LEN;
return 0x4000;
}
/*
* Returns copy of control register region
* I/O region is divided into banks and certain regions are unreadable
* Returns copy of whole control register region
* Note: skip RAM address register because accessing it will
* cause bus hangs!
*/
static void skge_get_regs(struct net_device *dev, struct ethtool_regs *regs,
void *p)
{
const struct skge_port *skge = netdev_priv(dev);
unsigned long offs;
const void __iomem *io = skge->hw->regs;
static const unsigned long bankmap
= (1<<0) | (1<<2) | (1<<8) | (1<<9)
| (1<<12) | (1<<13) | (1<<14) | (1<<15) | (1<<16)
| (1<<17) | (1<<20) | (1<<21) | (1<<22) | (1<<23)
| (1<<24) | (1<<25) | (1<<26) | (1<<27) | (1<<28);
regs->version = 1;
for (offs = 0; offs < regs->len; offs += 128) {
u32 len = min_t(u32, 128, regs->len - offs);
memset(p, 0, regs->len);
memcpy_fromio(p, io, B3_RAM_ADDR);
if (bankmap & (1<<(offs/128)))
memcpy_fromio(p + offs, io + offs, len);
else
memset(p + offs, 0, len);
}
memcpy_fromio(p + B3_RI_WTO_R1, io + B3_RI_WTO_R1,
regs->len - B3_RI_WTO_R1);
}
/* Wake on Lan only supported on Yukon chips with rev 1 or above */
@@ -776,17 +763,6 @@ static int skge_ring_alloc(struct skge_ring *ring, void *vaddr, u64 base)
return 0;
}
static struct sk_buff *skge_rx_alloc(struct net_device *dev, unsigned int size)
{
struct sk_buff *skb = dev_alloc_skb(size);
if (likely(skb)) {
skb->dev = dev;
skb_reserve(skb, NET_IP_ALIGN);
}
return skb;
}
/* Allocate and setup a new buffer for receiving */
static void skge_rx_setup(struct skge_port *skge, struct skge_element *e,
struct sk_buff *skb, unsigned int bufsize)
@@ -859,16 +835,17 @@ static int skge_rx_fill(struct skge_port *skge)
{
struct skge_ring *ring = &skge->rx_ring;
struct skge_element *e;
unsigned int bufsize = skge->rx_buf_size;
e = ring->start;
do {
struct sk_buff *skb = skge_rx_alloc(skge->netdev, bufsize);
struct sk_buff *skb;
skb = dev_alloc_skb(skge->rx_buf_size + NET_IP_ALIGN);
if (!skb)
return -ENOMEM;
skge_rx_setup(skge, e, skb, bufsize);
skb_reserve(skb, NET_IP_ALIGN);
skge_rx_setup(skge, e, skb, skge->rx_buf_size);
} while ( (e = e->next) != ring->start);
ring->to_clean = ring->start;
@@ -2443,6 +2420,14 @@ static void yukon_set_multicast(struct net_device *dev)
gma_write16(hw, port, GM_RX_CTRL, reg);
}
static inline u16 phy_length(const struct skge_hw *hw, u32 status)
{
if (hw->chip_id == CHIP_ID_GENESIS)
return status >> XMR_FS_LEN_SHIFT;
else
return status >> GMR_FS_LEN_SHIFT;
}
static inline int bad_phy_status(const struct skge_hw *hw, u32 status)
{
if (hw->chip_id == CHIP_ID_GENESIS)
@@ -2452,16 +2437,81 @@ static inline int bad_phy_status(const struct skge_hw *hw, u32 status)
(status & GMR_FS_RX_OK) == 0;
}
static void skge_rx_error(struct skge_port *skge, int slot,
u32 control, u32 status)
/* Get receive buffer from descriptor.
* Handles copy of small buffers and reallocation failures
*/
static inline struct sk_buff *skge_rx_get(struct skge_port *skge,
struct skge_element *e,
u32 control, u32 status, u16 csum)
{
if (netif_msg_rx_err(skge))
printk(KERN_DEBUG PFX "%s: rx err, slot %d control 0x%x status 0x%x\n",
skge->netdev->name, slot, control, status);
struct sk_buff *skb;
u16 len = control & BMU_BBC;
if (unlikely(netif_msg_rx_status(skge)))
printk(KERN_DEBUG PFX "%s: rx slot %td status 0x%x len %d\n",
skge->netdev->name, e - skge->rx_ring.start,
status, len);
if (len > skge->rx_buf_size)
goto error;
if ((control & (BMU_EOF|BMU_STF)) != (BMU_STF|BMU_EOF))
skge->net_stats.rx_length_errors++;
else if (skge->hw->chip_id == CHIP_ID_GENESIS) {
goto error;
if (bad_phy_status(skge->hw, status))
goto error;
if (phy_length(skge->hw, status) != len)
goto error;
if (len < RX_COPY_THRESHOLD) {
skb = dev_alloc_skb(len + 2);
if (!skb)
goto resubmit;
skb_reserve(skb, 2);
pci_dma_sync_single_for_cpu(skge->hw->pdev,
pci_unmap_addr(e, mapaddr),
len, PCI_DMA_FROMDEVICE);
memcpy(skb->data, e->skb->data, len);
pci_dma_sync_single_for_device(skge->hw->pdev,
pci_unmap_addr(e, mapaddr),
len, PCI_DMA_FROMDEVICE);
skge_rx_reuse(e, skge->rx_buf_size);
} else {
struct sk_buff *nskb;
nskb = dev_alloc_skb(skge->rx_buf_size + NET_IP_ALIGN);
if (!nskb)
goto resubmit;
pci_unmap_single(skge->hw->pdev,
pci_unmap_addr(e, mapaddr),
pci_unmap_len(e, maplen),
PCI_DMA_FROMDEVICE);
skb = e->skb;
prefetch(skb->data);
skge_rx_setup(skge, e, nskb, skge->rx_buf_size);
}
skb_put(skb, len);
skb->dev = skge->netdev;
if (skge->rx_csum) {
skb->csum = csum;
skb->ip_summed = CHECKSUM_HW;
}
skb->protocol = eth_type_trans(skb, skge->netdev);
return skb;
error:
if (netif_msg_rx_err(skge))
printk(KERN_DEBUG PFX "%s: rx err, slot %td control 0x%x status 0x%x\n",
skge->netdev->name, e - skge->rx_ring.start,
control, status);
if (skge->hw->chip_id == CHIP_ID_GENESIS) {
if (status & (XMR_FS_RUNT|XMR_FS_LNG_ERR))
skge->net_stats.rx_length_errors++;
if (status & XMR_FS_FRA_ERR)
@@ -2476,56 +2526,10 @@ static void skge_rx_error(struct skge_port *skge, int slot,
if (status & GMR_FS_CRC_ERR)
skge->net_stats.rx_crc_errors++;
}
}
/* Get receive buffer from descriptor.
* Handles copy of small buffers and reallocation failures
*/
static inline struct sk_buff *skge_rx_get(struct skge_port *skge,
struct skge_element *e,
unsigned int len)
{
struct sk_buff *nskb, *skb;
if (len < RX_COPY_THRESHOLD) {
nskb = skge_rx_alloc(skge->netdev, len + NET_IP_ALIGN);
if (unlikely(!nskb))
return NULL;
pci_dma_sync_single_for_cpu(skge->hw->pdev,
pci_unmap_addr(e, mapaddr),
len, PCI_DMA_FROMDEVICE);
memcpy(nskb->data, e->skb->data, len);
pci_dma_sync_single_for_device(skge->hw->pdev,
pci_unmap_addr(e, mapaddr),
len, PCI_DMA_FROMDEVICE);
if (skge->rx_csum) {
struct skge_rx_desc *rd = e->desc;
nskb->csum = le16_to_cpu(rd->csum2);
nskb->ip_summed = CHECKSUM_HW;
}
skge_rx_reuse(e, skge->rx_buf_size);
return nskb;
} else {
nskb = skge_rx_alloc(skge->netdev, skge->rx_buf_size);
if (unlikely(!nskb))
return NULL;
pci_unmap_single(skge->hw->pdev,
pci_unmap_addr(e, mapaddr),
pci_unmap_len(e, maplen),
PCI_DMA_FROMDEVICE);
skb = e->skb;
if (skge->rx_csum) {
struct skge_rx_desc *rd = e->desc;
skb->csum = le16_to_cpu(rd->csum2);
skb->ip_summed = CHECKSUM_HW;
}
skge_rx_setup(skge, e, nskb, skge->rx_buf_size);
return skb;
}
resubmit:
skge_rx_reuse(e, skge->rx_buf_size);
return NULL;
}
@@ -2541,32 +2545,16 @@ static int skge_poll(struct net_device *dev, int *budget)
for (e = ring->to_clean; work_done < to_do; e = e->next) {
struct skge_rx_desc *rd = e->desc;
struct sk_buff *skb;
u32 control, len, status;
u32 control;
rmb();
control = rd->control;
if (control & BMU_OWN)
break;
len = control & BMU_BBC;
status = rd->status;
if (unlikely((control & (BMU_EOF|BMU_STF)) != (BMU_STF|BMU_EOF)
|| bad_phy_status(hw, status))) {
skge_rx_error(skge, e - ring->start, control, status);
skge_rx_reuse(e, skge->rx_buf_size);
continue;
}
if (netif_msg_rx_status(skge))
printk(KERN_DEBUG PFX "%s: rx slot %td status 0x%x len %d\n",
dev->name, e - ring->start, rd->status, len);
skb = skge_rx_get(skge, e, len);
skb = skge_rx_get(skge, e, control, rd->status,
le16_to_cpu(rd->csum2));
if (likely(skb)) {
skb_put(skb, len);
skb->protocol = eth_type_trans(skb, dev);
dev->last_rx = jiffies;
netif_receive_skb(skb);


@@ -953,6 +953,7 @@ enum {
*/
enum {
XMR_FS_LEN = 0x3fff<<18, /* Bit 31..18: Rx Frame Length */
XMR_FS_LEN_SHIFT = 18,
XMR_FS_2L_VLAN = 1<<17, /* Bit 17: tagged wh 2Lev VLAN ID*/
XMR_FS_1_VLAN = 1<<16, /* Bit 16: tagged wh 1ev VLAN ID*/
XMR_FS_BC = 1<<15, /* Bit 15: Broadcast Frame */
@@ -1868,6 +1869,7 @@ enum {
/* Receive Frame Status Encoding */
enum {
GMR_FS_LEN = 0xffff<<16, /* Bit 31..16: Rx Frame Length */
GMR_FS_LEN_SHIFT = 16,
GMR_FS_VLAN = 1<<13, /* Bit 13: VLAN Packet */
GMR_FS_JABBER = 1<<12, /* Bit 12: Jabber Packet */
GMR_FS_UN_SIZE = 1<<11, /* Bit 11: Undersize Packet */


@@ -7,7 +7,6 @@ int pci_hotplug (struct device *dev, char **envp, int num_envp,
char *buffer, int buffer_size)
{
struct pci_dev *pdev;
char *scratch;
int i = 0;
int length = 0;
@@ -18,9 +17,6 @@ int pci_hotplug (struct device *dev, char **envp, int num_envp,
if (!pdev)
return -ENODEV;
scratch = buffer;
if (add_hotplug_env_var(envp, num_envp, &i,
buffer, buffer_size, &length,
"PCI_CLASS=%04X", pdev->class))


@@ -62,7 +62,7 @@ static ssize_t add_slot_store(struct dlpar_io_attr *dlpar_attr,
char drc_name[MAX_DRC_NAME_LEN];
char *end;
if (nbytes > MAX_DRC_NAME_LEN)
if (nbytes >= MAX_DRC_NAME_LEN)
return 0;
memcpy(drc_name, buf, nbytes);
@@ -83,7 +83,7 @@ static ssize_t remove_slot_store(struct dlpar_io_attr *dlpar_attr,
char drc_name[MAX_DRC_NAME_LEN];
char *end;
if (nbytes > MAX_DRC_NAME_LEN)
if (nbytes >= MAX_DRC_NAME_LEN)
return 0;
memcpy(drc_name, buf, nbytes);


@@ -159,7 +159,7 @@ static int sn_hp_slot_private_alloc(struct hotplug_slot *bss_hotplug_slot,
pcibus_info = SN_PCIBUS_BUSSOFT_INFO(pci_bus);
slot = kcalloc(1, sizeof(*slot), GFP_KERNEL);
slot = kzalloc(sizeof(*slot), GFP_KERNEL);
if (!slot)
return -ENOMEM;
bss_hotplug_slot->private = slot;
@@ -491,7 +491,7 @@ static int sn_hotplug_slot_register(struct pci_bus *pci_bus)
if (sn_pci_slot_valid(pci_bus, device) != 1)
continue;
bss_hotplug_slot = kcalloc(1, sizeof(*bss_hotplug_slot),
bss_hotplug_slot = kzalloc(sizeof(*bss_hotplug_slot),
GFP_KERNEL);
if (!bss_hotplug_slot) {
rc = -ENOMEM;
@@ -499,7 +499,7 @@ static int sn_hotplug_slot_register(struct pci_bus *pci_bus)
}
bss_hotplug_slot->info =
kcalloc(1, sizeof(struct hotplug_slot_info),
kzalloc(sizeof(struct hotplug_slot_info),
GFP_KERNEL);
if (!bss_hotplug_slot->info) {
rc = -ENOMEM;


@@ -360,7 +360,7 @@ pci_create_resource_files(struct pci_dev *pdev)
continue;
/* allocate attribute structure, piggyback attribute name */
res_attr = kcalloc(1, sizeof(*res_attr) + 10, GFP_ATOMIC);
res_attr = kzalloc(sizeof(*res_attr) + 10, GFP_ATOMIC);
if (res_attr) {
char *res_attr_name = (char *)(res_attr + 1);


@@ -165,7 +165,7 @@ static void pci_read_bases(struct pci_dev *dev, unsigned int howmany, int rom)
if (l == 0xffffffff)
l = 0;
if ((l & PCI_BASE_ADDRESS_SPACE) == PCI_BASE_ADDRESS_SPACE_MEMORY) {
sz = pci_size(l, sz, PCI_BASE_ADDRESS_MEM_MASK);
sz = pci_size(l, sz, (u32)PCI_BASE_ADDRESS_MEM_MASK);
if (!sz)
continue;
res->start = l & PCI_BASE_ADDRESS_MEM_MASK;
@@ -215,7 +215,7 @@ static void pci_read_bases(struct pci_dev *dev, unsigned int howmany, int rom)
if (l == 0xffffffff)
l = 0;
if (sz && sz != 0xffffffff) {
sz = pci_size(l, sz, PCI_ROM_ADDRESS_MASK);
sz = pci_size(l, sz, (u32)PCI_ROM_ADDRESS_MASK);
if (sz) {
res->flags = (l & IORESOURCE_ROM_ENABLE) |
IORESOURCE_MEM | IORESOURCE_PREFETCH |


@@ -437,7 +437,7 @@ __ccwgroup_get_gdev_by_cdev(struct ccw_device *cdev)
if (cdev->dev.driver_data) {
gdev = (struct ccwgroup_device *)cdev->dev.driver_data;
if (get_device(&gdev->dev)) {
if (klist_node_attached(&gdev->dev.knode_bus))
if (device_is_registered(&gdev->dev))
return gdev;
put_device(&gdev->dev);
}


@@ -987,7 +987,7 @@ void usb_disable_device(struct usb_device *dev, int skip_ep0)
/* remove this interface if it has been registered */
interface = dev->actconfig->interface[i];
if (!klist_node_attached(&interface->dev.knode_bus))
if (!device_is_registered(&interface->dev))
continue;
dev_dbg (&dev->dev, "unregistering interface %s\n",
interface->dev.bus_id);


@@ -303,7 +303,7 @@ int usb_driver_claim_interface(struct usb_driver *driver,
/* if interface was already added, bind now; else let
* the future device_add() bind it, bypassing probe()
*/
if (klist_node_attached(&dev->knode_bus))
if (device_is_registered(dev))
device_bind_driver(dev);
return 0;
@@ -336,8 +336,8 @@ void usb_driver_release_interface(struct usb_driver *driver,
if (iface->condition != USB_INTERFACE_BOUND)
return;
/* release only after device_add() */
if (klist_node_attached(&dev->knode_bus)) {
/* don't release if the interface hasn't been added yet */
if (device_is_registered(dev)) {
iface->condition = USB_INTERFACE_UNBINDING;
device_release_driver(dev);
}


@@ -422,7 +422,7 @@ static inline void ep0_idle (struct pxa2xx_udc *dev)
}
static int
write_packet(volatile u32 *uddr, struct pxa2xx_request *req, unsigned max)
write_packet(volatile unsigned long *uddr, struct pxa2xx_request *req, unsigned max)
{
u8 *buf;
unsigned length, count;
@@ -2602,7 +2602,7 @@ static int __exit pxa2xx_udc_remove(struct device *_dev)
* VBUS IRQs should probably be ignored so that the PXA device just acts
* "dead" to USB hosts until system resume.
*/
static int pxa2xx_udc_suspend(struct device *dev, u32 state, u32 level)
static int pxa2xx_udc_suspend(struct device *dev, pm_message_t state, u32 level)
{
struct pxa2xx_udc *udc = dev_get_drvdata(dev);


@@ -69,11 +69,11 @@ struct pxa2xx_ep {
* UDDR = UDC Endpoint Data Register (the fifo)
* DRCM = DMA Request Channel Map
*/
volatile u32 *reg_udccs;
volatile u32 *reg_ubcr;
volatile u32 *reg_uddr;
volatile unsigned long *reg_udccs;
volatile unsigned long *reg_ubcr;
volatile unsigned long *reg_uddr;
#ifdef USE_DMA
volatile u32 *reg_drcmr;
volatile unsigned long *reg_drcmr;
#define drcmr(n) .reg_drcmr = & DRCMR ## n ,
#else
#define drcmr(n)


@@ -782,6 +782,9 @@ retry:
/* usb 1.1 says max 90% of a frame is available for periodic transfers.
* this driver doesn't promise that much since it's got to handle an
* IRQ per packet; irq handling latencies also use up that time.
*
* NOTE: the periodic schedule is a sparse tree, with the load for
* each branch minimized. see fig 3.5 in the OHCI spec for example.
*/
#define MAX_PERIODIC_LOAD 500 /* out of 1000 usec */
@@ -843,6 +846,7 @@ static int sl811h_urb_enqueue(
if (!(sl811->port1 & (1 << USB_PORT_FEAT_ENABLE))
|| !HC_IS_RUNNING(hcd->state)) {
retval = -ENODEV;
kfree(ep);
goto fail;
}
@@ -911,8 +915,16 @@ static int sl811h_urb_enqueue(
case PIPE_ISOCHRONOUS:
case PIPE_INTERRUPT:
urb->interval = ep->period;
if (ep->branch < PERIODIC_SIZE)
if (ep->branch < PERIODIC_SIZE) {
/* NOTE: the phase is correct here, but the value
* needs offsetting by the transfer queue depth.
* All current drivers ignore start_frame, so this
* is unlikely to ever matter...
*/
urb->start_frame = (sl811->frame & (PERIODIC_SIZE - 1))
+ ep->branch;
break;
}
retval = balance(sl811, ep->period, ep->load);
if (retval < 0)
@@ -1122,7 +1134,7 @@ sl811h_hub_descriptor (
desc->wHubCharacteristics = (__force __u16)cpu_to_le16(temp);
/* two bitmaps: ports removable, and legacy PortPwrCtrlMask */
desc->bitmap[0] = 1 << 1;
desc->bitmap[0] = 0 << 1;
desc->bitmap[1] = ~0;
}


@@ -647,6 +647,13 @@ static void read_bulk_callback(struct urb *urb, struct pt_regs *regs)
pkt_len -= 8;
}
/*
* If the packet is unreasonably long, quietly drop it rather than
* kernel panicking by calling skb_put.
*/
if (pkt_len > PEGASUS_MTU)
goto goon;
/*
* at this point we are sure pegasus->rx_skb != NULL
* so we go ahead and pass up the packet.
@@ -886,15 +893,17 @@ static inline void get_interrupt_interval(pegasus_t * pegasus)
__u8 data[2];
read_eprom_word(pegasus, 4, (__u16 *) data);
if (data[1] < 0x80) {
if (netif_msg_timer(pegasus))
dev_info(&pegasus->intf->dev,
"intr interval changed from %ums to %ums\n",
data[1], 0x80);
data[1] = 0x80;
#ifdef PEGASUS_WRITE_EEPROM
write_eprom_word(pegasus, 4, *(__u16 *) data);
if (pegasus->usb->speed != USB_SPEED_HIGH) {
if (data[1] < 0x80) {
if (netif_msg_timer(pegasus))
dev_info(&pegasus->intf->dev, "intr interval "
"changed from %ums to %ums\n",
data[1], 0x80);
data[1] = 0x80;
#ifdef PEGASUS_WRITE_EEPROM
write_eprom_word(pegasus, 4, *(__u16 *) data);
#endif
}
}
pegasus->intr_interval = data[1];
}
@@ -904,8 +913,9 @@ static void set_carrier(struct net_device *net)
pegasus_t *pegasus = netdev_priv(net);
u16 tmp;
if (read_mii_word(pegasus, pegasus->phy, MII_BMSR, &tmp))
if (!read_mii_word(pegasus, pegasus->phy, MII_BMSR, &tmp))
return;
if (tmp & BMSR_LSTATUS)
netif_carrier_on(net);
else
@@ -1355,6 +1365,7 @@ static void pegasus_disconnect(struct usb_interface *intf)
cancel_delayed_work(&pegasus->carrier_check);
unregister_netdev(pegasus->net);
usb_put_dev(interface_to_usbdev(intf));
unlink_all_urbs(pegasus);
free_all_urbs(pegasus);
free_skb_pool(pegasus);
if (pegasus->rx_skb)


@@ -16,7 +16,8 @@
#include "usb-serial.h"
static struct usb_device_id id_table [] = {
{ USB_DEVICE(0xf3d, 0x0112) },
{ USB_DEVICE(0xf3d, 0x0112) }, /* AirPrime CDMA Wireless PC Card */
{ USB_DEVICE(0x1410, 0x1110) }, /* Novatel Wireless Merlin CDMA */
{ },
};
MODULE_DEVICE_TABLE(usb, id_table);


@@ -1846,10 +1846,12 @@ static void ftdi_set_termios (struct usb_serial_port *port, struct termios *old_
} else {
/* set the baudrate determined before */
if (change_speed(port)) {
err("%s urb failed to set baurdrate", __FUNCTION__);
err("%s urb failed to set baudrate", __FUNCTION__);
}
/* Ensure RTS and DTR are raised when baudrate changed from 0 */
if ((old_termios->c_cflag & CBAUD) == B0) {
set_mctrl(port, TIOCM_DTR | TIOCM_RTS);
}
/* Ensure RTS and DTR are raised */
set_mctrl(port, TIOCM_DTR | TIOCM_RTS);
}
/* Set flow control */


@@ -25,6 +25,9 @@
2005-06-20 v0.4.1 add missing braces :-/
killed end-of-line whitespace
2005-07-15 v0.4.2 rename WLAN product to FUSION, add FUSION2
2005-09-10 v0.4.3 added HUAWEI E600 card and Audiovox AirCard
2005-09-20 v0.4.4 increased recv buffer size: the card sometimes
wants to send >2000 bytes.
Work sponsored by: Sigos GmbH, Germany <info@sigos.de>
@@ -71,15 +74,21 @@ static int option_send_setup(struct usb_serial_port *port);
/* Vendor and product IDs */
#define OPTION_VENDOR_ID 0x0AF0
#define HUAWEI_VENDOR_ID 0x12D1
#define AUDIOVOX_VENDOR_ID 0x0F3D
#define OPTION_PRODUCT_OLD 0x5000
#define OPTION_PRODUCT_FUSION 0x6000
#define OPTION_PRODUCT_FUSION2 0x6300
#define HUAWEI_PRODUCT_E600 0x1001
#define AUDIOVOX_PRODUCT_AIRCARD 0x0112
static struct usb_device_id option_ids[] = {
{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_OLD) },
{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_FUSION) },
{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_FUSION2) },
{ USB_DEVICE(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E600) },
{ USB_DEVICE(AUDIOVOX_VENDOR_ID, AUDIOVOX_PRODUCT_AIRCARD) },
{ } /* Terminating entry */
};
@@ -132,7 +141,7 @@ static int debug;
#define N_IN_URB 4
#define N_OUT_URB 1
#define IN_BUFLEN 1024
#define IN_BUFLEN 4096
#define OUT_BUFLEN 128
struct option_port_private {


@@ -317,6 +317,11 @@ dev_set_drvdata (struct device *dev, void *data)
dev->driver_data = data;
}
static inline int device_is_registered(struct device *dev)
{
return klist_node_attached(&dev->knode_bus);
}
/*
* High level routines for use by the bus drivers
*/