- documents the kernel probes debugging feature.
kref.txt
- docs on adding reference counters (krefs) to kernel objects.
-laptop-mode.txt
- - how to conserve battery power using laptop-mode.
laptops/
- directory with laptop related info and laptop driver documentation.
ldm.txt
- a brief description of LDM (Windows Dynamic Disks).
pi-futex.txt
- documentation on lightweight PI-futexes.
-pm.txt
- - info on Linux power management support.
pnp.txt
- Linux Plug and Play documentation.
-power_supply_class.txt
- - Tells userspace about battery, UPS, AC or DC power supply properties
power/
- directory with info on Linux PCI power management.
powerpc/
-Linux supports two methods of overriding the BIOS DSDT:
+Linux supports a method of overriding the BIOS DSDT:
CONFIG_ACPI_CUSTOM_DSDT builds the image into the kernel.
-CONFIG_ACPI_CUSTOM_DSDT_INITRD adds the image to the initrd.
-
-When to use these methods is described in detail on the
+When to use this method is described in detail on the
Linux/ACPI home page:
http://www.lesswatts.org/projects/acpi/overridingDSDT.php
-
-Note that if both options are used, the DSDT supplied
-by the INITRD method takes precedence.
-
-Documentation/initramfs-add-dsdt.sh is provided for convenience
-for use with the CONFIG_ACPI_CUSTOM_DSDT_INITRD method.
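+
+With a single method remaining, enabling it amounts to two .config
+lines (a minimal sketch; the path below is a hypothetical example,
+and the .hex file is the AmlCode table typically produced by
+"iasl -tc"):
+
+	CONFIG_ACPI_CUSTOM_DSDT=y
+	CONFIG_ACPI_CUSTOM_DSDT_FILE="/root/fixed-dsdt.hex"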
+++ /dev/null
-#!/bin/bash
-# Adds a DSDT file to the initrd (if it's an initramfs)
-# first argument is the name of archive
-# second argument is the name of the file to add
-# The file will be copied as /DSDT.aml
-
-# 20060126: fix "Premature end of file" with some old cpio (Roland Robic)
-# 20060205: this time it should really work
-
-# check the arguments
-if [ $# -ne 2 ]; then
- program_name=$(basename $0)
- echo "\
-$program_name: too few arguments
-Usage: $program_name initrd-name.img DSDT-to-add.aml
-Adds a DSDT file to an initrd (in initramfs format)
-
- initrd-name.img: filename of the initrd in initramfs format
- DSDT-to-add.aml: filename of the DSDT file to add
- " 1>&2
- exit 1
-fi
-
-# we should check it's an initramfs
-
-tempcpio=$(mktemp -d)
-# cleanup on exit, hangup, interrupt, quit, termination
-trap 'rm -rf $tempcpio' 0 1 2 3 15
-
-# extract the archive
-gunzip -c "$1" > "$tempcpio"/initramfs.cpio || exit 1
-
-# copy the DSDT file at the root of the directory so that we can call it "/DSDT.aml"
-cp -f "$2" "$tempcpio"/DSDT.aml
-
-# add the file
-cd "$tempcpio"
-(echo DSDT.aml | cpio --quiet -H newc -o -A -O "$tempcpio"/initramfs.cpio) || exit 1
-cd "$OLDPWD"
-
-# re-compress the archive
-gzip -c "$tempcpio"/initramfs.cpio > "$1"
-
laptop_mode
-----------
laptop_mode is a knob that controls "laptop mode". All the things that are
-controlled by this knob are discussed in Documentation/laptop-mode.txt.
+controlled by this knob are discussed in Documentation/laptops/laptop-mode.txt.
block_dump
----------
block_dump enables block I/O debugging when set to a nonzero value. More
-information on block I/O debugging is in Documentation/laptop-mode.txt.
+information on block I/O debugging is in Documentation/laptops/laptop-mode.txt.
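
For quick experiments, both knobs can be driven from a root shell
(a minimal sketch; any nonzero value enables the feature):

	echo 5 > /proc/sys/vm/laptop_mode	# enable laptop mode
	echo 1 > /proc/sys/vm/block_dump	# log block I/O via printk
	dmesg | tail				# see which tasks hit the disk
	echo 0 > /proc/sys/vm/block_dump	# turn both off again
	echo 0 > /proc/sys/vm/laptop_mode
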
swap_token_timeout
------------------
strict -- Be less tolerant of platforms that are not
strictly ACPI specification compliant.
- See also Documentation/pm.txt, pci=noacpi
+ See also Documentation/power/pm.txt, pci=noacpi
acpi_apic_instance= [ACPI, IOAPIC]
Format: <int>
acpi_no_auto_ssdt [HW,ACPI] Disable automatic loading of SSDT
- acpi_no_initrd_override [KNL,ACPI]
- Disable loading custom ACPI tables from the initramfs
-
acpi_os_name= [HW,ACPI] Tell ACPI BIOS the name of the OS
Format: To spoof as Windows 98: ="Microsoft Windows"
- This file
acer-wmi.txt
- information on the Acer Laptop WMI Extras driver.
+laptop-mode.txt
+ - how to conserve battery power using laptop-mode.
sony-laptop.txt
- Sony Notebook Control Driver (SNC) Readme.
sonypi.txt
To send me the DSDT, as root/sudo:
-cat /sys/firmware/acpi/DSDT > dsdt
+cat /sys/firmware/acpi/tables/DSDT > dsdt
And send me the resulting 'dsdt' file.
The LED is exposed through the LED subsystem, and can be found in:
-/sys/devices/platform/acer-wmi/leds/acer-mail:green/
+/sys/devices/platform/acer-wmi/leds/acer-wmi::mail/
The mail LED is autodetected, so if you don't have one, the LED device won't
be registered.
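
Since it is registered as an ordinary LED class device, the generic
LED sysfs attributes apply (a minimal sketch of that interface):

echo 1 > /sys/devices/platform/acer-wmi/leds/acer-wmi::mail/brightness
echo 0 > /sys/devices/platform/acer-wmi/leds/acer-wmi::mail/brightness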
- Registering suspend notifiers in device drivers
pci.txt
- How the PCI Subsystem Does Power Management
+pm.txt
+ - info on Linux power management support.
+pm_qos_interface.txt
+ - info on Linux PM Quality of Service interface
+power_supply_class.txt
+ - Tells userspace about battery, UPS, AC or DC power supply properties
s2ram.txt
- How to get suspend to ram working (and debug it when it isn't)
states.txt
* EINVAL if the request is not supported
* EBUSY if the device is now busy and cannot handle the request
* ENOMEM if the device was unable to handle the request due to memory
- *
+ *
* Details: The device request callback will be called before the
* device/system enters a suspend state (ACPI D1-D3) or
 * after the device/system resumes from suspend (ACPI D0).
This is given by the thermal zone driver as part of registration.
E.g. "ACPI thermal zone" indicates it's an ACPI thermal device.
RO
- Optional
+ Required
temp Current temperature as reported by thermal zone (sensor)
- Unit: degree Celsius
+ Unit: millidegree Celsius
RO
Required
charge of the thermal management.
trip_point_[0-*]_temp The temperature above which trip point will be fired
- Unit: degree Celsius
+ Unit: millidegree Celsius
RO
Optional
e.g. for the memory controller device on the intel_menlow platform:
this should be "Memory controller"
RO
- Optional
+ Required
max_state The maximum permissible cooling state of this cooling device.
RO
|thermal_zone1:
|-----type: ACPI thermal zone
- |-----temp: 37
+ |-----temp: 37000
|-----mode: kernel
- |-----trip_point_0_temp: 100
+ |-----trip_point_0_temp: 100000
|-----trip_point_0_type: critical
- |-----trip_point_1_temp: 80
+ |-----trip_point_1_temp: 80000
|-----trip_point_1_type: passive
- |-----trip_point_2_temp: 70
- |-----trip_point_2_type: active[0]
- |-----trip_point_3_temp: 60
- |-----trip_point_3_type: active[1]
+ |-----trip_point_2_temp: 70000
+ |-----trip_point_2_type: active0
+ |-----trip_point_3_temp: 60000
+ |-----trip_point_3_type: active1
|-----cdev0: --->/sys/class/thermal/cooling_device0
|-----cdev0_trip_point: 1 /* cdev0 can be used for passive */
|-----cdev1: --->/sys/class/thermal/cooling_device3
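
Userspace reads these attributes directly; note the millidegree
Celsius units (a minimal sketch against the tree shown above):

	$ cat /sys/class/thermal/thermal_zone1/temp
	37000					# i.e. 37.000 degrees Celsius
	$ cat /sys/class/thermal/thermal_zone1/trip_point_0_temp
	100000					# the critical trip point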
W: http://www.lesswatts.org/projects/acpi/
S: Maintained
+AD1889 ALSA SOUND DRIVER
+P: Kyle McMartin
+M: kyle@parisc-linux.org
+P: Thibaut Varene
+M: T-Bone@parisc-linux.org
+W: http://wiki.parisc-linux.org/AD1889
+L: linux-parisc@vger.kernel.org
+S: Maintained
+
ADM1025 HARDWARE MONITOR DRIVER
P: Jean Delvare
M: khali@linux-fr.org
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 25
-EXTRAVERSION = -rc5
+EXTRAVERSION = -rc6
NAME = Funky Weasel is Jiggy wit it
# *DOCUMENTATION*
config PCI_SYSCALL
def_bool PCI
+config IOMMU_HELPER
+ def_bool PCI
+
config ALPHA_CORE_AGP
bool
depends on ALPHA_GENERIC || ALPHA_TITAN || ALPHA_MARVEL
#include <linux/scatterlist.h>
#include <linux/log2.h>
#include <linux/dma-mapping.h>
+#include <linux/iommu-helper.h>
#include <asm/io.h>
#include <asm/hwrpb.h>
return iommu_arena_new_node(0, hose, base, window_size, align);
}
-static inline int is_span_boundary(unsigned int index, unsigned int nr,
- unsigned long shift,
- unsigned long boundary_size)
-{
- shift = (shift + index) & (boundary_size - 1);
- return shift + nr > boundary_size;
-}
-
/* Must be called with the arena lock held */
static long
iommu_arena_find_pages(struct device *dev, struct pci_iommu_arena *arena,
base = arena->dma_base >> PAGE_SHIFT;
if (dev) {
boundary_size = dma_get_seg_boundary(dev) + 1;
- BUG_ON(!is_power_of_2(boundary_size));
boundary_size >>= PAGE_SHIFT;
} else {
boundary_size = 1UL << (32 - PAGE_SHIFT);
again:
while (i < n && p+i < nent) {
- if (!i && is_span_boundary(p, n, base, boundary_size)) {
+ if (!i && iommu_is_span_boundary(p, n, base, boundary_size)) {
p = ALIGN(p + 1, mask + 1);
goto again;
}
# Modified for PA-RISC Linux by Paul Lahaie, Alex deVries,
# Mike Shaver, Helge Deller and Martin K. Petersen
#
+
+KBUILD_DEFCONFIG := default_defconfig
+
NM = sh $(srctree)/arch/parisc/nm
CHECKFLAGS += -D__hppa__=1
spin_unlock_irqrestore(&pdc_lock, flags);
}
+/* locked by pdc_console_lock */
+static int __attribute__((aligned(8))) iodc_retbuf[32];
+static char __attribute__((aligned(64))) iodc_dbuf[4096];
/**
* pdc_iodc_print - Console print using IODC.
* Since the HP console requires CR+LF to perform a 'newline', we translate
* "\n" to "\r\n".
*/
-int pdc_iodc_print(unsigned char *str, unsigned count)
+int pdc_iodc_print(const unsigned char *str, unsigned count)
{
- /* XXX Should we spinlock posx usage */
static int posx; /* for simple TAB-Simulation... */
- int __attribute__((aligned(8))) iodc_retbuf[32];
- char __attribute__((aligned(64))) iodc_dbuf[4096];
unsigned int i;
unsigned long flags;
- memset(iodc_dbuf, 0, 4096);
- for (i = 0; i < count && i < 2048;) {
+ for (i = 0; i < count && i < 79;) {
switch(str[i]) {
case '\n':
iodc_dbuf[i+0] = '\r';
iodc_dbuf[i+1] = '\n';
i += 2;
posx = 0;
- break;
+ goto print;
case '\t':
while (posx & 7) {
iodc_dbuf[i] = ' ';
}
}
+ /* if we're at the end of line, and not already inserting a newline,
+ * insert one anyway. iodc console doesn't claim to support >79 char
+ * lines. don't account for this in the return value.
+ */
+ if (i == 79 && iodc_dbuf[i-1] != '\n') {
+ iodc_dbuf[i+0] = '\r';
+ iodc_dbuf[i+1] = '\n';
+ }
+
+print:
spin_lock_irqsave(&pdc_lock, flags);
real32_call(PAGE0->mem_cons.iodc_io,
(unsigned long)PAGE0->mem_cons.hpa, ENTRY_IO_COUT,
*/
int pdc_iodc_getc(void)
{
- unsigned long flags;
- static int __attribute__((aligned(8))) iodc_retbuf[32];
- static char __attribute__((aligned(64))) iodc_dbuf[4096];
int ch;
int status;
+ unsigned long flags;
/* Bail if no console input device. */
if (!PAGE0->mem_kbd.iodc_io)
{HPHW_NPROC,0x887,0x4,0x91,"Storm Peak Slow"},
{HPHW_NPROC,0x888,0x4,0x91,"Storm Peak Fast DC-"},
{HPHW_NPROC,0x889,0x4,0x91,"Storm Peak Fast"},
- {HPHW_NPROC,0x88A,0x4,0x91,"Crestone Peak"},
+ {HPHW_NPROC,0x88A,0x4,0x91,"Crestone Peak Slow"},
+ {HPHW_NPROC,0x88C,0x4,0x91,"Orca Mako+"},
+ {HPHW_NPROC,0x88D,0x4,0x91,"Rainier/Medel Mako+ Slow"},
+ {HPHW_NPROC,0x88E,0x4,0x91,"Rainier/Medel Mako+ Fast"},
+ {HPHW_NPROC,0x894,0x4,0x91,"Mt. Hamilton Fast Mako+"},
+ {HPHW_NPROC,0x895,0x4,0x91,"Storm Peak Slow Mako+"},
+ {HPHW_NPROC,0x896,0x4,0x91,"Storm Peak Fast Mako+"},
+ {HPHW_NPROC,0x897,0x4,0x91,"Storm Peak DC- Slow Mako+"},
+ {HPHW_NPROC,0x898,0x4,0x91,"Storm Peak DC- Fast Mako+"},
+ {HPHW_NPROC,0x899,0x4,0x91,"Mt. Hamilton Slow Mako+"},
+ {HPHW_NPROC,0x89B,0x4,0x91,"Crestone Peak Mako+ Slow"},
+ {HPHW_NPROC,0x89C,0x4,0x91,"Crestone Peak Mako+ Fast"},
{HPHW_A_DIRECT, 0x004, 0x0000D, 0x00, "Arrakis MUX"},
{HPHW_A_DIRECT, 0x005, 0x0000D, 0x00, "Dyun Kiuh MUX"},
{HPHW_A_DIRECT, 0x006, 0x0000D, 0x00, "Baat Kiuh AP/MUX (40299B)"},
#include <asm/pgtable.h>
#include <linux/linkage.h>
+#include <linux/init.h>
.level LEVEL
- .data
+ __INITDATA
ENTRY(boot_args)
.word 0 /* arg0 */
.word 0 /* arg1 */
.word 0 /* arg3 */
END(boot_args)
- .text
+ .section .text.head
.align 4
.import init_thread_union,data
.import fault_vector_20,code /* IVA parisc 2.0 32 bit */
ENDPROC(stext)
#ifndef CONFIG_64BIT
- .data
+ .section .data.read_mostly
.align 4
.export $global$,data
#include <linux/tty.h>
#include <asm/pdc.h> /* for iodc_call() proto and friends */
+static DEFINE_SPINLOCK(pdc_console_lock);
static void pdc_console_write(struct console *co, const char *s, unsigned count)
{
- pdc_iodc_print(s, count);
+ int i = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&pdc_console_lock, flags);
+ do {
+ i += pdc_iodc_print(s + i, count - i);
+ } while (i < count);
+ spin_unlock_irqrestore(&pdc_console_lock, flags);
}
-void pdc_printf(const char *fmt, ...)
+int pdc_console_poll_key(struct console *co)
{
- va_list args;
- char buf[1024];
- int i, len;
-
- va_start(args, fmt);
- len = vscnprintf(buf, sizeof(buf), fmt, args);
- va_end(args);
+ int c;
+ unsigned long flags;
- pdc_iodc_print(buf, len);
-}
+ spin_lock_irqsave(&pdc_console_lock, flags);
+ c = pdc_iodc_getc();
+ spin_unlock_irqrestore(&pdc_console_lock, flags);
-int pdc_console_poll_key(struct console *co)
-{
- return pdc_iodc_getc();
+ return c;
}
static int pdc_console_setup(struct console *co, char *options)
ENTRY_COMP(kexec_load) /* 300 */
ENTRY_COMP(utimensat)
ENTRY_COMP(signalfd)
- ENTRY_COMP(timerfd)
+ ENTRY_SAME(ni_syscall) /* was timerfd */
ENTRY_SAME(eventfd)
ENTRY_COMP(fallocate) /* 305 */
+ ENTRY_SAME(timerfd_create)
+ ENTRY_COMP(timerfd_settime)
+ ENTRY_COMP(timerfd_gettime)
/* Nothing yet */
DEFINE_SPINLOCK(pa_dbit_lock);
#endif
+void parisc_show_stack(struct task_struct *t, unsigned long *sp,
+ struct pt_regs *regs);
+
static int printbinary(char *buf, unsigned long x, int nbits)
{
unsigned long mask = 1UL << (nbits - 1);
print_symbol(" IAOQ[1]: %s\n", regs->iaoq[1]);
printk(level);
print_symbol(" RP(r2): %s\n", regs->gr[2]);
+
+ parisc_show_stack(current, NULL, regs);
}
printk("\n");
}
-void show_stack(struct task_struct *task, unsigned long *s)
+void parisc_show_stack(struct task_struct *task, unsigned long *sp,
+ struct pt_regs *regs)
{
struct unwind_frame_info info;
+ struct task_struct *t;
+
+ t = task ? task : current;
+ if (regs) {
+ unwind_frame_init(&info, t, regs);
+ goto show_stack;
+ }
- if (!task) {
+ if (t == current) {
unsigned long sp;
HERE:
unwind_frame_init(&info, current, &r);
}
} else {
- unwind_frame_init_from_blocked_task(&info, task);
+ unwind_frame_init_from_blocked_task(&info, t);
}
+show_stack:
do_show_stack(&info);
}
+void show_stack(struct task_struct *t, unsigned long *sp)
+{
+ return parisc_show_stack(t, sp, NULL);
+}
+
int is_valid_bugaddr(unsigned long iaoq)
{
return 1;
machines with more than one CPU.
In order to use APM, you will need supporting software. For location
- and more information, read <file:Documentation/pm.txt> and the
+ and more information, read <file:Documentation/power/pm.txt> and the
Battery Powered Linux mini-HOWTO, available from
<http://www.tldp.org/docs.html#howto>.
}
if (tx) {
- pr_debug("%s: (async) len: %zu\n", __FUNCTION__, len);
+ pr_debug("%s: (async) len: %zu\n", __func__, len);
async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
} else {
void *dest_buf, *src_buf;
- pr_debug("%s: (sync) len: %zu\n", __FUNCTION__, len);
+ pr_debug("%s: (sync) len: %zu\n", __func__, len);
/* wait for any prerequisite operations */
if (depend_tx) {
BUG_ON(depend_tx->ack);
if (dma_wait_for_async_tx(depend_tx) == DMA_ERROR)
panic("%s: DMA_ERROR waiting for depend_tx\n",
- __FUNCTION__);
+ __func__);
}
dest_buf = kmap_atomic(dest, KM_USER0) + dest_offset;
}
if (tx) {
- pr_debug("%s: (async) len: %zu\n", __FUNCTION__, len);
+ pr_debug("%s: (async) len: %zu\n", __func__, len);
async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
} else { /* run the memset synchronously */
void *dest_buf;
- pr_debug("%s: (sync) len: %zu\n", __FUNCTION__, len);
+ pr_debug("%s: (sync) len: %zu\n", __func__, len);
dest_buf = (void *) (((char *) page_address(dest)) + offset);
BUG_ON(depend_tx->ack);
if (dma_wait_for_async_tx(depend_tx) == DMA_ERROR)
panic("%s: DMA_ERROR waiting for depend_tx\n",
- __FUNCTION__);
+ __func__);
}
memset(dest_buf, val, len);
tx = NULL;
if (tx) {
- pr_debug("%s: (async)\n", __FUNCTION__);
+ pr_debug("%s: (async)\n", __func__);
async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
} else {
- pr_debug("%s: (sync)\n", __FUNCTION__);
+ pr_debug("%s: (sync)\n", __func__);
/* wait for any prerequisite operations */
if (depend_tx) {
BUG_ON(depend_tx->ack);
if (dma_wait_for_async_tx(depend_tx) == DMA_ERROR)
panic("%s: DMA_ERROR waiting for depend_tx\n",
- __FUNCTION__);
+ __func__);
}
async_tx_sync_epilog(flags, depend_tx, cb_fn, cb_param);
int i;
unsigned long dma_prep_flags = cb_fn ? DMA_PREP_INTERRUPT : 0;
- pr_debug("%s: len: %zu\n", __FUNCTION__, len);
+ pr_debug("%s: len: %zu\n", __func__, len);
dma_dest = dma_map_page(device->dev, dest, offset, len,
DMA_FROM_DEVICE);
void *_dest;
int i;
- pr_debug("%s: len: %zu\n", __FUNCTION__, len);
+ pr_debug("%s: len: %zu\n", __func__, len);
/* reuse the 'src_list' array to convert to buffer pointers */
for (i = 0; i < src_cnt; i++)
DMA_ERROR)
panic("%s: DMA_ERROR waiting for "
"depend_tx\n",
- __FUNCTION__);
+ __func__);
}
do_sync_xor(dest, &src_list[src_off], offset,
unsigned long dma_prep_flags = cb_fn ? DMA_PREP_INTERRUPT : 0;
int i;
- pr_debug("%s: (async) len: %zu\n", __FUNCTION__, len);
+ pr_debug("%s: (async) len: %zu\n", __func__, len);
for (i = 0; i < src_cnt; i++)
dma_src[i] = dma_map_page(device->dev, src_list[i],
} else {
unsigned long xor_flags = flags;
- pr_debug("%s: (sync) len: %zu\n", __FUNCTION__, len);
+ pr_debug("%s: (sync) len: %zu\n", __func__, len);
xor_flags |= ASYNC_TX_XOR_DROP_DST;
xor_flags &= ~ASYNC_TX_ACK;
if (tx) {
if (dma_wait_for_async_tx(tx) == DMA_ERROR)
panic("%s: DMA_ERROR waiting for tx\n",
- __FUNCTION__);
+ __func__);
async_tx_ack(tx);
}
If you have a legacy free Toshiba laptop (such as the Libretto L1
series), say Y.
-config ACPI_CUSTOM_DSDT
- bool "Include Custom DSDT"
+config ACPI_CUSTOM_DSDT_FILE
+ string "Custom DSDT Table file to include"
+ default ""
depends on !STANDALONE
- default n
help
This option supports a custom DSDT by linking it into the kernel.
See Documentation/acpi/dsdt-override.txt
- If unsure, say N.
-
-config ACPI_CUSTOM_DSDT_FILE
- string "Custom DSDT Table file to include"
- depends on ACPI_CUSTOM_DSDT
- default ""
- help
Enter the full path name to the file which includes the AmlCode
declaration.
-config ACPI_CUSTOM_DSDT_INITRD
- bool "Read Custom DSDT from initramfs"
- depends on BLK_DEV_INITRD
- default n
- help
- This option supports a custom DSDT by optionally loading it from initrd.
- See Documentation/acpi/dsdt-override.txt
+ If unsure, don't enter a file name.
- If you are not using this feature now, but may use it later,
- it is safe to say Y here.
+config ACPI_CUSTOM_DSDT
+ bool
+ default ACPI_CUSTOM_DSDT_FILE != ""
config ACPI_BLACKLIST_YEAR
int "Disable ACPI for systems before Jan 1st this year" if X86_32
acpi_kobj = kobject_create_and_add("acpi", firmware_kobj);
if (!acpi_kobj) {
- printk(KERN_WARNING "%s: kset create error\n", __FUNCTION__);
+ printk(KERN_WARNING "%s: kset create error\n", __func__);
acpi_kobj = NULL;
}
input->phys = button->phys;
input->id.bustype = BUS_HOST;
input->id.product = button->type;
+ input->dev.parent = &device->dev;
switch (button->type) {
case ACPI_BUTTON_TYPE_POWER:
struct mutex lock;
wait_queue_head_t wait;
struct list_head list;
+ atomic_t irq_count;
u8 handlers_installed;
} *boot_ec, *first_ec;
{
int ret = 0;
+ atomic_set(&ec->irq_count, 0);
+
if (unlikely(event == ACPI_EC_EVENT_OBF_1 &&
test_bit(EC_FLAGS_NO_OBF1_GPE, &ec->flags)))
force_poll = 1;
while (time_before(jiffies, delay)) {
if (acpi_ec_check_status(ec, event))
goto end;
+ msleep(5);
}
}
pr_err(PREFIX "acpi_ec_wait timeout,"
struct acpi_ec *ec = data;
pr_debug(PREFIX "~~~> interrupt\n");
+ atomic_inc(&ec->irq_count);
+ if (atomic_read(&ec->irq_count) > 5) {
+ pr_err(PREFIX "GPE storm detected, disabling EC GPE\n");
+ acpi_disable_gpe(NULL, ec->gpe, ACPI_ISR);
+ clear_bit(EC_FLAGS_GPE_MODE, &ec->flags);
+ return ACPI_INTERRUPT_HANDLED;
+ }
clear_bit(EC_FLAGS_WAIT_GPE, &ec->flags);
if (test_bit(EC_FLAGS_GPE_MODE, &ec->flags))
wake_up(&ec->wait);
boot_ec->command_addr = ecdt_ptr->control.address;
boot_ec->data_addr = ecdt_ptr->data.address;
boot_ec->gpe = ecdt_ptr->gpe;
- if (ACPI_FAILURE(acpi_get_handle(NULL, ecdt_ptr->id,
- &boot_ec->handle))) {
- pr_info("Failed to locate handle for boot EC\n");
- boot_ec->handle = ACPI_ROOT_OBJECT;
- }
+ boot_ec->handle = ACPI_ROOT_OBJECT;
} else {
/* This workaround is needed only on some broken machines,
* which require early EC, but fail to provide ECDT */
#define OSI_STRING_LENGTH_MAX 64 /* arbitrary */
static char osi_additional_string[OSI_STRING_LENGTH_MAX];
-#ifdef CONFIG_ACPI_CUSTOM_DSDT_INITRD
-static int acpi_no_initrd_override;
-#endif
-
/*
* "Ode to _OSI(Linux)"
*
return AE_OK;
}
-#ifdef CONFIG_ACPI_CUSTOM_DSDT_INITRD
-static struct acpi_table_header *acpi_find_dsdt_initrd(void)
-{
- struct file *firmware_file;
- mm_segment_t oldfs;
- unsigned long len, len2;
- struct acpi_table_header *dsdt_buffer, *ret = NULL;
- struct kstat stat;
- char *ramfs_dsdt_name = "/DSDT.aml";
-
- printk(KERN_INFO PREFIX "Checking initramfs for custom DSDT\n");
-
- /*
- * Never do this at home, only the user-space is allowed to open a file.
- * The clean way would be to use the firmware loader.
- * But this code must be run before there is any userspace available.
- * A static/init firmware infrastructure doesn't exist yet...
- */
- if (vfs_stat(ramfs_dsdt_name, &stat) < 0)
- return ret;
-
- len = stat.size;
- /* check especially against empty files */
- if (len <= 4) {
- printk(KERN_ERR PREFIX "Failed: DSDT only %lu bytes.\n", len);
- return ret;
- }
-
- firmware_file = filp_open(ramfs_dsdt_name, O_RDONLY, 0);
- if (IS_ERR(firmware_file)) {
- printk(KERN_ERR PREFIX "Failed to open %s.\n", ramfs_dsdt_name);
- return ret;
- }
-
- dsdt_buffer = kmalloc(len, GFP_ATOMIC);
- if (!dsdt_buffer) {
- printk(KERN_ERR PREFIX "Failed to allocate %lu bytes.\n", len);
- goto err;
- }
-
- oldfs = get_fs();
- set_fs(KERNEL_DS);
- len2 = vfs_read(firmware_file, (char __user *)dsdt_buffer, len,
- &firmware_file->f_pos);
- set_fs(oldfs);
- if (len2 < len) {
- printk(KERN_ERR PREFIX "Failed to read %lu bytes from %s.\n",
- len, ramfs_dsdt_name);
- ACPI_FREE(dsdt_buffer);
- goto err;
- }
-
- printk(KERN_INFO PREFIX "Found %lu byte DSDT in %s.\n",
- len, ramfs_dsdt_name);
- ret = dsdt_buffer;
-err:
- filp_close(firmware_file, NULL);
- return ret;
-}
-#endif
-
acpi_status
acpi_os_table_override(struct acpi_table_header * existing_table,
struct acpi_table_header ** new_table)
#ifdef CONFIG_ACPI_CUSTOM_DSDT
if (strncmp(existing_table->signature, "DSDT", 4) == 0)
*new_table = (struct acpi_table_header *)AmlCode;
-#endif
-#ifdef CONFIG_ACPI_CUSTOM_DSDT_INITRD
- if ((strncmp(existing_table->signature, "DSDT", 4) == 0) &&
- !acpi_no_initrd_override) {
- struct acpi_table_header *initrd_table;
-
- initrd_table = acpi_find_dsdt_initrd();
- if (initrd_table)
- *new_table = initrd_table;
- }
#endif
if (*new_table != NULL) {
printk(KERN_WARNING PREFIX "Override [%4.4s-%8.8s], "
return AE_OK;
}
-#ifdef CONFIG_ACPI_CUSTOM_DSDT_INITRD
-static int __init acpi_no_initrd_override_setup(char *s)
-{
- acpi_no_initrd_override = 1;
- return 1;
-}
-__setup("acpi_no_initrd_override", acpi_no_initrd_override_setup);
-#endif
-
static irqreturn_t acpi_irq(int irq, void *dev_id)
{
u32 handled;
if (clash) {
if (acpi_enforce_resources != ENFORCE_RESOURCES_NO) {
- printk(KERN_INFO "%sACPI: %s resource %s [0x%llx-0x%llx]"
+ printk("%sACPI: %s resource %s [0x%llx-0x%llx]"
" conflicts with ACPI region %s"
" [0x%llx-0x%llx]\n",
acpi_enforce_resources == ENFORCE_RESOURCES_LAX
*/
+#include <linux/dmi.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
return NULL;
}
+/* http://bugzilla.kernel.org/show_bug.cgi?id=4773 */
+static struct dmi_system_id medion_md9580[] = {
+ {
+ .ident = "Medion MD9580-F laptop",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "MEDIONNB"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "A555"),
+ },
+ },
+ { }
+};
+
+/* http://bugzilla.kernel.org/show_bug.cgi?id=5044 */
+static struct dmi_system_id dell_optiplex[] = {
+ {
+ .ident = "Dell Optiplex GX1",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex GX1 600S+"),
+ },
+ },
+ { }
+};
+
+/* http://bugzilla.kernel.org/show_bug.cgi?id=10138 */
+static struct dmi_system_id hp_t5710[] = {
+ {
+ .ident = "HP t5710",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "hp t5000 series"),
+ DMI_MATCH(DMI_BOARD_NAME, "098Ch"),
+ },
+ },
+ { }
+};
+
+struct prt_quirk {
+ struct dmi_system_id *system;
+ unsigned int segment;
+ unsigned int bus;
+ unsigned int device;
+ unsigned char pin;
+ char *source; /* according to BIOS */
+ char *actual_source;
+};
+
+/*
+ * These systems have incorrect _PRT entries. The BIOS claims the PCI
+ * interrupt at the listed segment/bus/device/pin is connected to the first
+ * link device, but it is actually connected to the second.
+ */
+static struct prt_quirk prt_quirks[] = {
+ { medion_md9580, 0, 0, 9, 'A',
+ "\\_SB_.PCI0.ISA.LNKA",
+ "\\_SB_.PCI0.ISA.LNKB"},
+ { dell_optiplex, 0, 0, 0xd, 'A',
+ "\\_SB_.LNKB",
+ "\\_SB_.LNKA"},
+ { hp_t5710, 0, 0, 1, 'A',
+ "\\_SB_.PCI0.LNK1",
+ "\\_SB_.PCI0.LNK3"},
+};
+
+static void
+do_prt_fixups(struct acpi_prt_entry *entry, struct acpi_pci_routing_table *prt)
+{
+ int i;
+ struct prt_quirk *quirk;
+
+ for (i = 0; i < ARRAY_SIZE(prt_quirks); i++) {
+ quirk = &prt_quirks[i];
+
+ /* All current quirks involve link devices, not GSIs */
+ if (!prt->source)
+ continue;
+
+ if (dmi_check_system(quirk->system) &&
+ entry->id.segment == quirk->segment &&
+ entry->id.bus == quirk->bus &&
+ entry->id.device == quirk->device &&
+ entry->pin + 'A' == quirk->pin &&
+ !strcmp(prt->source, quirk->source) &&
+ strlen(prt->source) >= strlen(quirk->actual_source)) {
+ printk(KERN_WARNING PREFIX "firmware reports "
+ "%04x:%02x:%02x[%c] connected to %s; "
+ "changing to %s\n",
+ entry->id.segment, entry->id.bus,
+ entry->id.device, 'A' + entry->pin,
+ prt->source, quirk->actual_source);
+ strcpy(prt->source, quirk->actual_source);
+ }
+ }
+}
+
static int
acpi_pci_irq_add_entry(acpi_handle handle,
int segment, int bus, struct acpi_pci_routing_table *prt)
entry->id.function = prt->address & 0xFFFF;
entry->pin = prt->pin;
+ do_prt_fixups(entry, prt);
+
/*
* Type 1: Dynamic
* ---------------
}
}
-static int acpi_pci_root_add(struct acpi_device *device)
+static int __devinit acpi_pci_root_add(struct acpi_device *device)
{
int result = 0;
struct acpi_pci_root *root = NULL;
status = acpi_evaluate_integer(handle, "_STA", NULL, &sta);
- /*
- * if a processor object does not have an _STA object,
- * OSPM assumes that the processor is present.
- */
- if (status == AE_NOT_FOUND)
- return 1;
if (ACPI_SUCCESS(status) && (sta & ACPI_STA_DEVICE_PRESENT))
return 1;
- ACPI_EXCEPTION((AE_INFO, status, "Processor Device is not present"));
+ /*
+ * _STA is mandatory for a processor that supports hot plug
+ */
+ if (status == AE_NOT_FOUND)
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "Processor does not support hot plug\n"));
+ else
+ ACPI_EXCEPTION((AE_INFO, status,
+ "Processor Device is not present"));
return 0;
}
return 0;
}
-static void
-acpi_processor_hotplug_notify(acpi_handle handle, u32 event, void *data)
+static void __ref acpi_processor_hotplug_notify(acpi_handle handle,
+ u32 event, void *data)
{
struct acpi_processor *pr;
struct acpi_device *device = NULL;
switch (event) {
case ACPI_NOTIFY_BUS_CHECK:
case ACPI_NOTIFY_DEVICE_CHECK:
- printk("Processor driver received %s event\n",
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "Processor driver received %s event\n",
(event == ACPI_NOTIFY_BUS_CHECK) ?
- "ACPI_NOTIFY_BUS_CHECK" : "ACPI_NOTIFY_DEVICE_CHECK");
+ "ACPI_NOTIFY_BUS_CHECK" : "ACPI_NOTIFY_DEVICE_CHECK"));
if (!is_processor_present(handle))
break;
status = acpi_evaluate_object(handle, "_EJD", NULL, &buffer);
if (ACPI_SUCCESS(status)) {
obj = buffer.pointer;
- status = acpi_get_handle(NULL, obj->string.pointer, ejd);
+ status = acpi_get_handle(ACPI_ROOT_OBJECT, obj->string.pointer,
+ ejd);
kfree(buffer.pointer);
}
return status;
case ACPI_BUS_TYPE_DEVICE:
status = acpi_get_object_info(handle, &buffer);
if (ACPI_FAILURE(status)) {
- printk(KERN_ERR PREFIX "%s: Error reading device info\n", __FUNCTION__);
+ printk(KERN_ERR PREFIX "%s: Error reading device info\n", __func__);
return;
}
static void acpi_power_off(void)
{
/* acpi_sleep_prepare(ACPI_STATE_S5) should have already been called */
- printk("%s called\n", __FUNCTION__);
+ printk("%s called\n", __func__);
local_irq_disable();
acpi_enable_wakeup_device(ACPI_STATE_S5);
acpi_enter_sleep_state(ACPI_STATE_S5);
goto fail;
for (i = 0; i < num_counters; ++i) {
- char buffer[10];
+ char buffer[12];
char *name;
if (i < num_gpes)
}
/* sys I/F for generic thermal sysfs support */
+#define KELVIN_TO_MILLICELSIUS(t) ((t) * 100 - 273200)
+
static int thermal_get_temp(struct thermal_zone_device *thermal, char *buf)
{
struct acpi_thermal *tz = thermal->devdata;
if (!tz)
return -EINVAL;
- return sprintf(buf, "%ld\n", KELVIN_TO_CELSIUS(tz->temperature));
+ return sprintf(buf, "%ld\n", KELVIN_TO_MILLICELSIUS(tz->temperature));
}
static const char enabled[] = "kernel";
if (tz->trips.critical.flags.valid) {
if (!trip)
- return sprintf(buf, "%ld\n", KELVIN_TO_CELSIUS(
+ return sprintf(buf, "%ld\n", KELVIN_TO_MILLICELSIUS(
tz->trips.critical.temperature));
trip--;
}
if (tz->trips.hot.flags.valid) {
if (!trip)
- return sprintf(buf, "%ld\n", KELVIN_TO_CELSIUS(
+ return sprintf(buf, "%ld\n", KELVIN_TO_MILLICELSIUS(
tz->trips.hot.temperature));
trip--;
}
if (tz->trips.passive.flags.valid) {
if (!trip)
- return sprintf(buf, "%ld\n", KELVIN_TO_CELSIUS(
+ return sprintf(buf, "%ld\n", KELVIN_TO_MILLICELSIUS(
tz->trips.passive.temperature));
trip--;
}
for (i = 0; i < ACPI_THERMAL_MAX_ACTIVE &&
tz->trips.active[i].flags.valid; i++) {
if (!trip)
- return sprintf(buf, "%ld\n", KELVIN_TO_CELSIUS(
+ return sprintf(buf, "%ld\n", KELVIN_TO_MILLICELSIUS(
tz->trips.active[i].temperature));
trip--;
}
#define HCI_VIDEO_OUT_CRT 0x2
#define HCI_VIDEO_OUT_TV 0x4
+static const struct acpi_device_id toshiba_device_ids[] = {
+ {"TOS6200", 0},
+ {"TOS1900", 0},
+ {"", 0},
+};
+MODULE_DEVICE_TABLE(acpi, toshiba_device_ids);
+
/* utility
*/
* RETURN: Updated pointer to the function name
*
* DESCRIPTION: Remove the "Acpi" prefix from the function name, if present.
- * This allows compiler macros such as __FUNCTION__ to be used
+ * This allows compiler macros such as __func__ to be used
* with no change to the debug output.
*
******************************************************************************/
* element -- which is legal)
*/
if (!internal_object) {
- *obj_length = 0;
+ *obj_length = sizeof(union acpi_object);
return_ACPI_STATUS(AE_OK);
}
break;
}
+ if (!element->reference.handle) {
+ printk(KERN_WARNING PREFIX "Invalid reference in"
+ " package %s\n", pathname);
+ status = AE_NULL_ENTRY;
+ break;
+ }
/* Get the acpi_handle. */
list->handles[i] = element->reference.handle;
kfree(obj);
- if (device->cap._BCL && device->cap._BCM && device->cap._BQC && max_level > 0){
+ if (device->cap._BCL && device->cap._BCM && max_level > 0) {
int result;
static int count = 0;
char *name;
if (!video)
goto end;
- printk(KERN_INFO PREFIX "Please implement %s\n", __FUNCTION__);
+ printk(KERN_INFO PREFIX "Please implement %s\n", __func__);
seq_printf(seq, "<TODO>\n");
end:
{
struct guid_block *block = NULL;
struct wmi_block *wblock = NULL;
- acpi_handle handle;
+ acpi_handle handle, wc_handle;
acpi_status status, wc_status = AE_ERROR;
struct acpi_object_list input, wc_input;
union acpi_object wc_params[1], wq_params[1];
* expensive, but have no corresponding WCxx method. So we
* should not fail if this happens.
*/
- wc_status = acpi_evaluate_object(handle, wc_method,
- &wc_input, NULL);
+ wc_status = acpi_get_handle(handle, wc_method, &wc_handle);
+ if (ACPI_SUCCESS(wc_status))
+ wc_status = acpi_evaluate_object(handle, wc_method,
+ &wc_input, NULL);
}
strcpy(method, "WQ");
* If ACPI_WMI_EXPENSIVE, call the relevant WCxx method, even if
* the WQxx method failed - we should disable collection anyway.
*/
- if ((block->flags & ACPI_WMI_EXPENSIVE) && wc_status) {
+ if ((block->flags & ACPI_WMI_EXPENSIVE) && ACPI_SUCCESS(wc_status)) {
wc_params[0].integer.value = 0;
status = acpi_evaluate_object(handle,
wc_method, &wc_input, NULL);
*/
static DEFINE_SPINLOCK(floppy_lock);
-static struct completion device_release;
static unsigned short virtual_dma_port = 0x3f0;
irqreturn_t floppy_interrupt(int irq, void *dev_id);
static void floppy_device_release(struct device *dev)
{
- complete(&device_release);
}
static struct platform_device floppy_device[N_DRIVE];
{
int drive;
- init_completion(&device_release);
blk_unregister_region(MKDEV(FLOPPY_MAJOR, 0), 256);
unregister_blkdev(FLOPPY_MAJOR, "fd");
/* eject disk, if any */
fd_eject(0);
-
- wait_for_completion(&device_release);
}
module_param(floppy, charp, 0);
if (iobase || iobase1 || iobase2 || iobase3) {
for(i = 0; i < RC_NBOARD; i++)
- rc_board[0].base = 0;
+ rc_board[i].base = 0;
}
if (iobase)
!device->device_prep_dma_zero_sum);
BUG_ON(dma_has_cap(DMA_MEMSET, device->cap_mask) &&
!device->device_prep_dma_memset);
- BUG_ON(dma_has_cap(DMA_ZERO_SUM, device->cap_mask) &&
+ BUG_ON(dma_has_cap(DMA_INTERRUPT, device->cap_mask) &&
!device->device_prep_dma_interrupt);
BUG_ON(!device->device_alloc_chan_resources);
}
-static void set_sr(struct fsl_dma_chan *fsl_chan, dma_addr_t val)
+static void set_sr(struct fsl_dma_chan *fsl_chan, u32 val)
{
DMA_OUT(fsl_chan, &fsl_chan->reg_base->sr, val, 32);
}
-static dma_addr_t get_sr(struct fsl_dma_chan *fsl_chan)
+static u32 get_sr(struct fsl_dma_chan *fsl_chan)
{
return DMA_IN(fsl_chan, &fsl_chan->reg_base->sr, 32);
}
dma_pool_destroy(fsl_chan->desc_pool);
}
+static struct dma_async_tx_descriptor *
+fsl_dma_prep_interrupt(struct dma_chan *chan)
+{
+ struct fsl_dma_chan *fsl_chan;
+ struct fsl_desc_sw *new;
+
+ if (!chan)
+ return NULL;
+
+ fsl_chan = to_fsl_chan(chan);
+
+ new = fsl_dma_alloc_descriptor(fsl_chan);
+ if (!new) {
+ dev_err(fsl_chan->dev, "No free memory for link descriptor\n");
+ return NULL;
+ }
+
+ new->async_tx.cookie = -EBUSY;
+ new->async_tx.ack = 0;
+
+	/* Set End-of-link to the last link descriptor of new list */
+ set_ld_eol(fsl_chan, new);
+
+ return &new->async_tx;
+}
+
static struct dma_async_tx_descriptor *fsl_dma_prep_memcpy(
struct dma_chan *chan, dma_addr_t dma_dest, dma_addr_t dma_src,
size_t len, unsigned long flags)
dev_dbg(fsl_chan->dev, "new link desc alloc %p\n", new);
#endif
- copy = min(len, FSL_DMA_BCR_MAX_CNT);
+ copy = min(len, (size_t)FSL_DMA_BCR_MAX_CNT);
set_desc_cnt(fsl_chan, &new->hw, copy);
set_desc_src(fsl_chan, &new->hw, dma_src);
spin_lock_irqsave(&fsl_chan->desc_lock, flags);
- fsl_dma_update_completed_cookie(fsl_chan);
dev_dbg(fsl_chan->dev, "chan completed_cookie = %d\n",
fsl_chan->completed_cookie);
list_for_each_entry_safe(desc, _desc, &fsl_chan->ld_queue, node) {
if (ld_node != &fsl_chan->ld_queue) {
/* Get the ld start address from ld_queue */
next_dest_addr = to_fsl_desc(ld_node)->async_tx.phys;
- dev_dbg(fsl_chan->dev, "xfer LDs staring from 0x%016llx\n",
- (u64)next_dest_addr);
+	dev_dbg(fsl_chan->dev, "xfer LDs starting from %p\n",
+ (void *)next_dest_addr);
set_cdar(fsl_chan, next_dest_addr);
dma_start(fsl_chan);
} else {
static irqreturn_t fsl_dma_chan_do_interrupt(int irq, void *data)
{
struct fsl_dma_chan *fsl_chan = (struct fsl_dma_chan *)data;
- dma_addr_t stat;
+ u32 stat;
stat = get_sr(fsl_chan);
dev_dbg(fsl_chan->dev, "event: channel %d, stat = 0x%x\n",
*/
if (stat & FSL_DMA_SR_EOSI) {
dev_dbg(fsl_chan->dev, "event: End-of-segments INT\n");
- dev_dbg(fsl_chan->dev, "event: clndar 0x%016llx, "
- "nlndar 0x%016llx\n", (u64)get_cdar(fsl_chan),
- (u64)get_ndar(fsl_chan));
+ dev_dbg(fsl_chan->dev, "event: clndar %p, nlndar %p\n",
+ (void *)get_cdar(fsl_chan), (void *)get_ndar(fsl_chan));
stat &= ~FSL_DMA_SR_EOSI;
+ fsl_dma_update_completed_cookie(fsl_chan);
}
/* If the current transfer is the end-of-transfer,
fsl_chan_ld_cleanup(fsl_chan);
}
+#ifdef FSL_DMA_CALLBACKTEST
static void fsl_dma_callback_test(struct fsl_dma_chan *fsl_chan)
{
if (fsl_chan)
dev_info(fsl_chan->dev, "selftest: callback is ok!\n");
}
+#endif
+#ifdef CONFIG_FSL_DMA_SELFTEST
static int fsl_dma_self_test(struct fsl_dma_chan *fsl_chan)
{
struct dma_chan *chan;
if (err) {
for (i = 0; (*(src + i) == *(dest + i)) && (i < test_size);
i++);
- dev_err(fsl_chan->dev, "selftest: Test failed, data %d/%d is "
+ dev_err(fsl_chan->dev, "selftest: Test failed, data %d/%ld is "
"error! src 0x%x, dest 0x%x\n",
- i, test_size, *(src + i), *(dest + i));
+ i, (long)test_size, *(src + i), *(dest + i));
}
free_resources:
kfree(src);
return err;
}
+#endif
static int __devinit of_fsl_dma_chan_probe(struct of_device *dev,
const struct of_device_id *match)
}
dev_info(&dev->dev, "Probe the Freescale DMA driver for %s "
- "controller at 0x%08x...\n",
- match->compatible, fdev->reg.start);
+ "controller at %p...\n",
+ match->compatible, (void *)fdev->reg.start);
fdev->reg_base = ioremap(fdev->reg.start, fdev->reg.end
- fdev->reg.start + 1);
dma_cap_set(DMA_INTERRUPT, fdev->common.cap_mask);
fdev->common.device_alloc_chan_resources = fsl_dma_alloc_chan_resources;
fdev->common.device_free_chan_resources = fsl_dma_free_chan_resources;
+ fdev->common.device_prep_dma_interrupt = fsl_dma_prep_interrupt;
fdev->common.device_prep_dma_memcpy = fsl_dma_prep_memcpy;
fdev->common.device_is_tx_complete = fsl_dma_is_complete;
fdev->common.device_issue_pending = fsl_dma_memcpy_issue_pending;
int busy = iop_chan_is_busy(iop_chan);
int seen_current = 0, slot_cnt = 0, slots_per_op = 0;
- dev_dbg(iop_chan->device->common.dev, "%s\n", __FUNCTION__);
+ dev_dbg(iop_chan->device->common.dev, "%s\n", __func__);
/* free completed slots from the chain starting with
* the oldest descriptor
*/
spin_unlock_bh(&iop_chan->lock);
dev_dbg(iop_chan->device->common.dev, "%s cookie: %d slot: %d\n",
- __FUNCTION__, sw_desc->async_tx.cookie, sw_desc->idx);
+ __func__, sw_desc->async_tx.cookie, sw_desc->idx);
return cookie;
}
struct iop_adma_desc_slot *sw_desc, *grp_start;
int slot_cnt, slots_per_op;
- dev_dbg(iop_chan->device->common.dev, "%s\n", __FUNCTION__);
+ dev_dbg(iop_chan->device->common.dev, "%s\n", __func__);
spin_lock_bh(&iop_chan->lock);
slot_cnt = iop_chan_interrupt_slot_count(&slots_per_op, iop_chan);
BUG_ON(unlikely(len > IOP_ADMA_MAX_BYTE_COUNT));
dev_dbg(iop_chan->device->common.dev, "%s len: %u\n",
- __FUNCTION__, len);
+ __func__, len);
spin_lock_bh(&iop_chan->lock);
slot_cnt = iop_chan_memcpy_slot_count(len, &slots_per_op);
BUG_ON(unlikely(len > IOP_ADMA_MAX_BYTE_COUNT));
dev_dbg(iop_chan->device->common.dev, "%s len: %u\n",
- __FUNCTION__, len);
+ __func__, len);
spin_lock_bh(&iop_chan->lock);
slot_cnt = iop_chan_memset_slot_count(len, &slots_per_op);
dev_dbg(iop_chan->device->common.dev,
"%s src_cnt: %d len: %u flags: %lx\n",
- __FUNCTION__, src_cnt, len, flags);
+ __func__, src_cnt, len, flags);
spin_lock_bh(&iop_chan->lock);
slot_cnt = iop_chan_xor_slot_count(len, src_cnt, &slots_per_op);
return NULL;
dev_dbg(iop_chan->device->common.dev, "%s src_cnt: %d len: %u\n",
- __FUNCTION__, src_cnt, len);
+ __func__, src_cnt, len);
spin_lock_bh(&iop_chan->lock);
slot_cnt = iop_chan_zero_sum_slot_count(len, src_cnt, &slots_per_op);
iop_desc_set_zero_sum_byte_count(grp_start, len);
grp_start->xor_check_result = result;
pr_debug("\t%s: grp_start->xor_check_result: %p\n",
- __FUNCTION__, grp_start->xor_check_result);
+ __func__, grp_start->xor_check_result);
sw_desc->unmap_src_cnt = src_cnt;
sw_desc->unmap_len = len;
while (src_cnt--)
iop_chan->last_used = NULL;
dev_dbg(iop_chan->device->common.dev, "%s slots_allocated %d\n",
- __FUNCTION__, iop_chan->slots_allocated);
+ __func__, iop_chan->slots_allocated);
spin_unlock_bh(&iop_chan->lock);
/* one is ok since we left it on there on purpose */
{
struct iop_adma_chan *chan = data;
- dev_dbg(chan->device->common.dev, "%s\n", __FUNCTION__);
+ dev_dbg(chan->device->common.dev, "%s\n", __func__);
tasklet_schedule(&chan->irq_tasklet);
{
struct iop_adma_chan *chan = data;
- dev_dbg(chan->device->common.dev, "%s\n", __FUNCTION__);
+ dev_dbg(chan->device->common.dev, "%s\n", __func__);
tasklet_schedule(&chan->irq_tasklet);
int err = 0;
struct iop_adma_chan *iop_chan;
- dev_dbg(device->common.dev, "%s\n", __FUNCTION__);
+ dev_dbg(device->common.dev, "%s\n", __func__);
src = kzalloc(sizeof(u8) * IOP_ADMA_TEST_SIZE, GFP_KERNEL);
if (!src)
int err = 0;
struct iop_adma_chan *iop_chan;
- dev_dbg(device->common.dev, "%s\n", __FUNCTION__);
+ dev_dbg(device->common.dev, "%s\n", __func__);
for (src_idx = 0; src_idx < IOP_ADMA_NUM_SRC_TEST; src_idx++) {
xor_srcs[src_idx] = alloc_page(GFP_KERNEL);
}
dev_dbg(&pdev->dev, "%s: allocated descriptor pool virt %p phys %p\n",
- __FUNCTION__, adev->dma_desc_pool_virt,
+ __func__, adev->dma_desc_pool_virt,
(void *) adev->dma_desc_pool);
adev->id = plat_data->hw_id;
dma_cookie_t cookie;
int slot_cnt, slots_per_op;
- dev_dbg(iop_chan->device->common.dev, "%s\n", __FUNCTION__);
+ dev_dbg(iop_chan->device->common.dev, "%s\n", __func__);
spin_lock_bh(&iop_chan->lock);
slot_cnt = iop_chan_memcpy_slot_count(0, &slots_per_op);
dma_cookie_t cookie;
int slot_cnt, slots_per_op;
- dev_dbg(iop_chan->device->common.dev, "%s\n", __FUNCTION__);
+ dev_dbg(iop_chan->device->common.dev, "%s\n", __func__);
spin_lock_bh(&iop_chan->lock);
slot_cnt = iop_chan_xor_slot_count(0, 2, &slots_per_op);
-# -*- shell-script -*-
-
comment "An alternative FireWire stack is available with EXPERIMENTAL=y"
depends on EXPERIMENTAL=n
NOTE:
You should only build ONE of the stacks, unless you REALLY know what
- you are doing. If you install both, you should configure them only as
- modules rather than link them statically, and you should blacklist one
- of the concurrent low-level drivers in /etc/modprobe.conf. Add either
-
- blacklist firewire-ohci
- or
- blacklist ohci1394
-
- there depending on which driver you DON'T want to have auto-loaded.
- You can optionally do the same with the other IEEE 1394/ FireWire
- drivers.
-
- If you have an old modprobe which doesn't implement the blacklist
- directive, use either
-
- install firewire-ohci /bin/true
- or
- install ohci1394 /bin/true
-
- and so on, depending on which modules you DON't want to have
- auto-loaded.
+ you are doing.
config FIREWIRE_OHCI
tristate "Support for OHCI FireWire host controllers"
NOTE:
- If you also build ohci1394 of the classic stack, blacklist either
- ohci1394 or firewire-ohci to let hotplug load only the desired driver.
+ You should only build ohci1394 or firewire-ohci, but not both.
+ If you nevertheless want to install both, you should configure them
+ only as modules and blacklist the driver(s) which you don't want to
+ have auto-loaded. Add either
+
+ blacklist firewire-ohci
+ or
+ blacklist ohci1394
+ blacklist video1394
+ blacklist dv1394
+
+ to /etc/modprobe.conf or /etc/modprobe.d/* and update modprobe.conf
+ depending on your distribution. The latter two modules should be
+ blacklisted together with ohci1394 because they depend on ohci1394.
+
+ If you have an old modprobe which doesn't implement the blacklist
+ directive, use "install modulename /bin/true" for the modules to be
+ blacklisted.
config FIREWIRE_SBP2
tristate "Support for storage devices (SBP-2 protocol driver)"
You should also enable support for disks, CD-ROMs, etc. in the SCSI
configuration section.
-
- NOTE:
-
- If you also build sbp2 of the classic stack, blacklist either sbp2
- or firewire-sbp2 to let hotplug load only the desired driver.
-
#include <asm/page.h>
#include <asm/system.h>
+#ifdef CONFIG_PPC_PMAC
+#include <asm/pmac_feature.h>
+#endif
+
#include "fw-ohci.h"
#include "fw-transaction.h"
int generation;
int request_generation;
u32 bus_seconds;
+ bool old_uninorth;
/*
* Spinlock for accessing fw_ohci data. Never call out of
{
struct device *dev = ctx->ohci->card.device;
struct ar_buffer *ab;
- dma_addr_t ab_bus;
+ dma_addr_t uninitialized_var(ab_bus);
size_t offset;
- ab = (struct ar_buffer *) __get_free_page(GFP_ATOMIC);
+ ab = dma_alloc_coherent(dev, PAGE_SIZE, &ab_bus, GFP_ATOMIC);
if (ab == NULL)
return -ENOMEM;
- ab_bus = dma_map_single(dev, ab, PAGE_SIZE, DMA_BIDIRECTIONAL);
- if (dma_mapping_error(ab_bus)) {
- free_page((unsigned long) ab);
- return -ENOMEM;
- }
-
memset(&ab->descriptor, 0, sizeof(ab->descriptor));
ab->descriptor.control = cpu_to_le16(DESCRIPTOR_INPUT_MORE |
DESCRIPTOR_STATUS |
ab->descriptor.res_count = cpu_to_le16(PAGE_SIZE - offset);
ab->descriptor.branch_address = 0;
- dma_sync_single_for_device(dev, ab_bus, PAGE_SIZE, DMA_BIDIRECTIONAL);
-
ctx->last_buffer->descriptor.branch_address = cpu_to_le32(ab_bus | 1);
ctx->last_buffer->next = ab;
ctx->last_buffer = ab;
return 0;
}
+#if defined(CONFIG_PPC_PMAC) && defined(CONFIG_PPC32)
+#define cond_le32_to_cpu(v) \
+ (ohci->old_uninorth ? (__force __u32)(v) : le32_to_cpu(v))
+#else
+#define cond_le32_to_cpu(v) le32_to_cpu(v)
+#endif
+
static __le32 *handle_ar_packet(struct ar_context *ctx, __le32 *buffer)
{
struct fw_ohci *ohci = ctx->ohci;
struct fw_packet p;
u32 status, length, tcode;
- p.header[0] = le32_to_cpu(buffer[0]);
- p.header[1] = le32_to_cpu(buffer[1]);
- p.header[2] = le32_to_cpu(buffer[2]);
+ p.header[0] = cond_le32_to_cpu(buffer[0]);
+ p.header[1] = cond_le32_to_cpu(buffer[1]);
+ p.header[2] = cond_le32_to_cpu(buffer[2]);
tcode = (p.header[0] >> 4) & 0x0f;
switch (tcode) {
break;
case TCODE_READ_BLOCK_REQUEST :
- p.header[3] = le32_to_cpu(buffer[3]);
+ p.header[3] = cond_le32_to_cpu(buffer[3]);
p.header_length = 16;
p.payload_length = 0;
break;
case TCODE_READ_BLOCK_RESPONSE:
case TCODE_LOCK_REQUEST:
case TCODE_LOCK_RESPONSE:
- p.header[3] = le32_to_cpu(buffer[3]);
+ p.header[3] = cond_le32_to_cpu(buffer[3]);
p.header_length = 16;
p.payload_length = p.header[3] >> 16;
break;
/* FIXME: What to do about evt_* errors? */
length = (p.header_length + p.payload_length + 3) / 4;
- status = le32_to_cpu(buffer[length]);
+ status = cond_le32_to_cpu(buffer[length]);
p.ack = ((status >> 16) & 0x1f) - 16;
p.speed = (status >> 21) & 0x7;
*/
if (p.ack + 16 == 0x09)
- ohci->request_generation = (buffer[2] >> 16) & 0xff;
+ ohci->request_generation = (p.header[2] >> 16) & 0xff;
else if (ctx == &ohci->ar_request_ctx)
fw_core_handle_request(&ohci->card, &p);
else
if (d->res_count == 0) {
size_t size, rest, offset;
+ dma_addr_t buffer_bus;
/*
* This descriptor is finished and we may have a
*/
offset = offsetof(struct ar_buffer, data);
- dma_unmap_single(ohci->card.device,
- le32_to_cpu(ab->descriptor.data_address) - offset,
- PAGE_SIZE, DMA_BIDIRECTIONAL);
+ buffer_bus = le32_to_cpu(ab->descriptor.data_address) - offset;
buffer = ab;
ab = ab->next;
while (buffer < end)
buffer = handle_ar_packet(ctx, buffer);
- free_page((unsigned long)buffer);
+ dma_free_coherent(ohci->card.device, PAGE_SIZE,
+ buffer, buffer_bus);
ar_context_add_page(ctx);
} else {
buffer = ctx->pointer;
context_add_buffer(struct context *ctx)
{
struct descriptor_buffer *desc;
- dma_addr_t bus_addr;
+ dma_addr_t uninitialized_var(bus_addr);
int offset;
/*
*/
self_id_count = (reg_read(ohci, OHCI1394_SelfIDCount) >> 3) & 0x3ff;
- generation = (le32_to_cpu(ohci->self_id_cpu[0]) >> 16) & 0xff;
+ generation = (cond_le32_to_cpu(ohci->self_id_cpu[0]) >> 16) & 0xff;
rmb();
for (i = 1, j = 0; j < self_id_count; i += 2, j++) {
if (ohci->self_id_cpu[i] != ~ohci->self_id_cpu[i + 1])
fw_error("inconsistent self IDs\n");
- ohci->self_id_buffer[j] = le32_to_cpu(ohci->self_id_cpu[i]);
+ ohci->self_id_buffer[j] =
+ cond_le32_to_cpu(ohci->self_id_cpu[i]);
}
rmb();
unsigned long flags;
int retval = -EBUSY;
__be32 *next_config_rom;
- dma_addr_t next_config_rom_bus;
+ dma_addr_t uninitialized_var(next_config_rom_bus);
ohci = fw_ohci(card);
void *p, *end;
int i;
- if (db->first_res_count > 0 && db->second_res_count > 0) {
+ if (db->first_res_count != 0 && db->second_res_count != 0) {
if (ctx->excess_bytes <= le16_to_cpu(db->second_req_count)) {
/* This descriptor isn't done yet, stop iteration. */
return 0;
memcpy(ctx->header + i + 4, p + 8, ctx->base.header_size - 4);
i += ctx->base.header_size;
ctx->excess_bytes +=
- (le32_to_cpu(*(u32 *)(p + 4)) >> 16) & 0xffff;
+ (le32_to_cpu(*(__le32 *)(p + 4)) >> 16) & 0xffff;
p += ctx->base.header_size + 4;
}
ctx->header_length = i;
int err;
size_t size;
+#ifdef CONFIG_PPC_PMAC
+	/* Necessary on some machines if fw-ohci was loaded/unloaded before */
+ if (machine_is(powermac)) {
+ struct device_node *ofn = pci_device_to_OF_node(dev);
+
+ if (ofn) {
+ pmac_call_feature(PMAC_FTR_1394_CABLE_POWER, ofn, 0, 1);
+ pmac_call_feature(PMAC_FTR_1394_ENABLE, ofn, 0, 1);
+ }
+ }
+#endif /* CONFIG_PPC_PMAC */
+
ohci = kzalloc(sizeof(*ohci), GFP_KERNEL);
if (ohci == NULL) {
fw_error("Could not malloc fw_ohci data.\n");
pci_write_config_dword(dev, OHCI1394_PCI_HCI_Control, 0);
pci_set_drvdata(dev, ohci);
+#if defined(CONFIG_PPC_PMAC) && defined(CONFIG_PPC32)
+ ohci->old_uninorth = dev->vendor == PCI_VENDOR_ID_APPLE &&
+ dev->device == PCI_DEVICE_ID_APPLE_UNI_N_FW;
+#endif
spin_lock_init(&ohci->lock);
tasklet_init(&ohci->bus_reset_tasklet,
pci_disable_device(dev);
fw_card_put(&ohci->card);
+#ifdef CONFIG_PPC_PMAC
+ /* On UniNorth, power down the cable and turn off the chip clock
+ * to save power on laptops */
+ if (machine_is(powermac)) {
+ struct device_node *ofn = pci_device_to_OF_node(dev);
+
+ if (ofn) {
+ pmac_call_feature(PMAC_FTR_1394_ENABLE, ofn, 0, 0);
+ pmac_call_feature(PMAC_FTR_1394_CABLE_POWER, ofn, 0, 0);
+ }
+ }
+#endif /* CONFIG_PPC_PMAC */
+
fw_notify("Removed fw-ohci device.\n");
}
if (err)
fw_error("pci_set_power_state failed with %d\n", err);
+/* PowerMac suspend code comes last */
+#ifdef CONFIG_PPC_PMAC
+ if (machine_is(powermac)) {
+ struct device_node *ofn = pci_device_to_OF_node(pdev);
+
+ if (ofn)
+ pmac_call_feature(PMAC_FTR_1394_ENABLE, ofn, 0, 0);
+ }
+#endif /* CONFIG_PPC_PMAC */
+
return 0;
}
struct fw_ohci *ohci = pci_get_drvdata(pdev);
int err;
+/* PowerMac resume code comes first */
+#ifdef CONFIG_PPC_PMAC
+ if (machine_is(powermac)) {
+ struct device_node *ofn = pci_device_to_OF_node(pdev);
+
+ if (ofn)
+ pmac_call_feature(PMAC_FTR_1394_ENABLE, ofn, 0, 1);
+ }
+#endif /* CONFIG_PPC_PMAC */
+
pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev);
err = pci_enable_device(pdev);
#define SBP2_ORB_TIMEOUT 2000U /* Timeout in ms */
#define SBP2_ORB_NULL 0x80000000
#define SBP2_MAX_SG_ELEMENT_LENGTH 0xf000
+#define SBP2_RETRY_LIMIT 0xf /* 15 retries */
#define SBP2_DIRECTION_TO_MEDIA 0x0
#define SBP2_DIRECTION_FROM_MEDIA 0x1
.model = ~0,
.workarounds = SBP2_WORKAROUND_128K_MAX_TRANS,
},
+ /* Datafab MD2-FW2 with Symbios/LSILogic SYM13FW500 bridge */ {
+ .firmware_revision = 0x002600,
+ .model = ~0,
+ .workarounds = SBP2_WORKAROUND_128K_MAX_TRANS,
+ },
/*
* There are iPods (2nd gen, 3rd gen) with model_id == 0, but
kref_put(&tgt->kref, sbp2_release_target);
}
+static void
+complete_set_busy_timeout(struct fw_card *card, int rcode,
+ void *payload, size_t length, void *done)
+{
+ complete(done);
+}
+
+static void sbp2_set_busy_timeout(struct sbp2_logical_unit *lu)
+{
+ struct fw_device *device = fw_device(lu->tgt->unit->device.parent);
+ DECLARE_COMPLETION_ONSTACK(done);
+ struct fw_transaction t;
+ static __be32 busy_timeout;
+
+ /* FIXME: we should try to set dual-phase cycle_limit too */
+ busy_timeout = cpu_to_be32(SBP2_RETRY_LIMIT);
+
+ fw_send_request(device->card, &t, TCODE_WRITE_QUADLET_REQUEST,
+ lu->tgt->node_id, lu->generation, device->max_speed,
+ CSR_REGISTER_BASE + CSR_BUSY_TIMEOUT, &busy_timeout,
+ sizeof(busy_timeout), complete_set_busy_timeout, &done);
+ wait_for_completion(&done);
+}
+
static void sbp2_reconnect(struct work_struct *work);
static void sbp2_login(struct work_struct *work)
fw_notify("%s: logged in to LUN %04x (%d retries)\n",
tgt->bus_id, lu->lun, lu->retries);
-#if 0
- /* FIXME: The linux1394 sbp2 does this last step. */
- sbp2_set_busy_timeout(scsi_id);
-#endif
+ /* set appropriate retry limit(s) in BUSY_TIMEOUT register */
+ sbp2_set_busy_timeout(lu);
PREPARE_DELAYED_WORK(&lu->work, sbp2_reconnect);
sbp2_agent_reset(lu);
#include <linux/module.h>
#include <linux/wait.h>
#include <linux/errno.h>
+#include <asm/bug.h>
#include <asm/system.h>
#include "fw-transaction.h"
#include "fw-topology.h"
node1 = fw_node(list1.next);
while (&node0->link != &list0) {
-	/* assert(node0->port_count == node1->port_count); */
+	WARN_ON(node0->port_count != node1->port_count);
if (node0->link_on && !node1->link_on)
event = FW_NODE_LINK_OFF;
else if (!node0->link_on && node1->link_on)
void *payload, size_t length, void *callback_data)
{
int i, start, end;
- u32 *map;
+ __be32 *map;
if (!TCODE_IS_READ_REQUEST(tcode)) {
fw_send_response(card, request, RCODE_TYPE_ERROR);
static inline void
fw_memcpy_from_be32(void *_dst, void *_src, size_t size)
{
- u32 *dst = _dst;
- u32 *src = _src;
+ u32 *dst = _dst;
+ __be32 *src = _src;
int i;
for (i = 0; i < size / 4; i++)
- dst[i] = cpu_to_be32(src[i]);
+ dst[i] = be32_to_cpu(src[i]);
}
static inline void
.model_id = SBP2_ROM_VALUE_WILDCARD,
.workarounds = SBP2_WORKAROUND_128K_MAX_TRANS,
},
+ /* Datafab MD2-FW2 with Symbios/LSILogic SYM13FW500 bridge */ {
+ .firmware_revision = 0x002600,
+ .model_id = SBP2_ROM_VALUE_WILDCARD,
+ .workarounds = SBP2_WORKAROUND_128K_MAX_TRANS,
+ },
/* iPod 4th generation */ {
.firmware_revision = 0x0a2700,
.model_id = 0x000021,
#define IPATH_IB_LINKDOWN 0
#define IPATH_IB_LINKARM 1
#define IPATH_IB_LINKACTIVE 2
-#define IPATH_IB_LINKINIT 3
+#define IPATH_IB_LINKDOWN_ONLY 3
#define IPATH_IB_LINKDOWN_SLEEP 4
#define IPATH_IB_LINKDOWN_DISABLE 5
#define IPATH_IB_LINK_LOOPBACK 6 /* enable local loopback */
* -ETIMEDOUT state can have multiple states set, for any of several
* transitions.
*/
-static int ipath_wait_linkstate(struct ipath_devdata *dd, u32 state,
- int msecs)
+int ipath_wait_linkstate(struct ipath_devdata *dd, u32 state, int msecs)
{
dd->ipath_state_wanted = state;
wait_event_interruptible_timeout(ipath_state_wait,
static void ipath_set_ib_lstate(struct ipath_devdata *dd, int which)
{
static const char *what[4] = {
- [0] = "DOWN",
- [INFINIPATH_IBCC_LINKCMD_INIT] = "INIT",
+ [0] = "NOP",
+ [INFINIPATH_IBCC_LINKCMD_DOWN] = "DOWN",
[INFINIPATH_IBCC_LINKCMD_ARMED] = "ARMED",
[INFINIPATH_IBCC_LINKCMD_ACTIVE] = "ACTIVE"
};
(dd, dd->ipath_kregs->kr_ibcstatus) >>
INFINIPATH_IBCS_LINKTRAININGSTATE_SHIFT) &
INFINIPATH_IBCS_LINKTRAININGSTATE_MASK]);
- /* flush all queued sends when going to DOWN or INIT, to be sure that
+ /* flush all queued sends when going to DOWN to be sure that
* they don't block MAD packets */
- if (!linkcmd || linkcmd == INFINIPATH_IBCC_LINKCMD_INIT)
+ if (linkcmd == INFINIPATH_IBCC_LINKCMD_DOWN)
ipath_cancel_sends(dd, 1);
ipath_write_kreg(dd, dd->ipath_kregs->kr_ibcctrl,
int ret;
switch (newstate) {
+ case IPATH_IB_LINKDOWN_ONLY:
+ ipath_set_ib_lstate(dd, INFINIPATH_IBCC_LINKCMD_DOWN <<
+ INFINIPATH_IBCC_LINKCMD_SHIFT);
+ /* don't wait */
+ ret = 0;
+ goto bail;
+
case IPATH_IB_LINKDOWN:
ipath_set_ib_lstate(dd, INFINIPATH_IBCC_LINKINITCMD_POLL <<
INFINIPATH_IBCC_LINKINITCMD_SHIFT);
ret = 0;
goto bail;
- case IPATH_IB_LINKINIT:
- if (dd->ipath_flags & IPATH_LINKINIT) {
- ret = 0;
- goto bail;
- }
- ipath_set_ib_lstate(dd, INFINIPATH_IBCC_LINKCMD_INIT <<
- INFINIPATH_IBCC_LINKCMD_SHIFT);
- lstate = IPATH_LINKINIT;
- break;
-
case IPATH_IB_LINKARM:
if (dd->ipath_flags & IPATH_LINKARMED) {
ret = 0;
int ipath_setrcvhdrsize(struct ipath_devdata *, unsigned);
int ipath_reset_device(int);
void ipath_get_faststats(unsigned long);
+int ipath_wait_linkstate(struct ipath_devdata *, u32, int);
int ipath_set_linkstate(struct ipath_devdata *, u8);
int ipath_set_mtu(struct ipath_devdata *, u16);
int ipath_set_lid(struct ipath_devdata *, u32, u8);
/* FALLTHROUGH */
case IB_PORT_DOWN:
if (lstate == 0)
- if (get_linkdowndefaultstate(dd))
- lstate = IPATH_IB_LINKDOWN_SLEEP;
- else
- lstate = IPATH_IB_LINKDOWN;
+ lstate = IPATH_IB_LINKDOWN_ONLY;
else if (lstate == 1)
lstate = IPATH_IB_LINKDOWN_SLEEP;
else if (lstate == 2)
else
goto err;
ipath_set_linkstate(dd, lstate);
+ ipath_wait_linkstate(dd, IPATH_LINKINIT | IPATH_LINKARMED |
+ IPATH_LINKACTIVE, 1000);
break;
case IB_PORT_ARMED:
ipath_set_linkstate(dd, IPATH_IB_LINKARM);
/**
* ipath_reset_qp - initialize the QP state to the reset state
* @qp: the QP to reset
+ * @type: the QP type
*/
-static void ipath_reset_qp(struct ipath_qp *qp)
+static void ipath_reset_qp(struct ipath_qp *qp, enum ib_qp_type type)
{
qp->remote_qpn = 0;
qp->qkey = 0;
qp->s_psn = 0;
qp->r_psn = 0;
qp->r_msn = 0;
- if (qp->ibqp.qp_type == IB_QPT_RC) {
+ if (type == IB_QPT_RC) {
qp->s_state = IB_OPCODE_RC_SEND_LAST;
qp->r_state = IB_OPCODE_RC_SEND_LAST;
} else {
wc.wr_id = qp->r_wr_id;
wc.opcode = IB_WC_RECV;
wc.status = err;
- ipath_cq_enter(to_icq(qp->ibqp.send_cq), &wc, 1);
+ ipath_cq_enter(to_icq(qp->ibqp.recv_cq), &wc, 1);
}
wc.status = IB_WC_WR_FLUSH_ERR;
switch (new_state) {
case IB_QPS_RESET:
- ipath_reset_qp(qp);
+ ipath_reset_qp(qp, ibqp->qp_type);
break;
case IB_QPS_ERR:
attr->port_num = 1;
attr->timeout = qp->timeout;
attr->retry_cnt = qp->s_retry_cnt;
- attr->rnr_retry = qp->s_rnr_retry;
+ attr->rnr_retry = qp->s_rnr_retry_cnt;
attr->alt_port_num = 0;
attr->alt_timeout = 0;
goto bail_qp;
}
qp->ip = NULL;
- ipath_reset_qp(qp);
+ ipath_reset_qp(qp, init_attr->qp_type);
break;
default:
list_move_tail(&qp->timerwait,
&dev->pending[dev->pending_index]);
spin_unlock(&dev->pending_lock);
+
+ if (opcode == OP(RDMA_READ_RESPONSE_MIDDLE))
+ qp->s_retry = qp->s_retry_cnt;
+
/*
* Update the RDMA receive state but do the copy w/o
* holding the locks and blocking interrupts.
#define INFINIPATH_IBCC_LINKINITCMD_SLEEP 3
#define INFINIPATH_IBCC_LINKINITCMD_SHIFT 16
#define INFINIPATH_IBCC_LINKCMD_MASK 0x3ULL
-#define INFINIPATH_IBCC_LINKCMD_INIT 1 /* move to 0x11 */
+#define INFINIPATH_IBCC_LINKCMD_DOWN 1 /* move to 0x11 */
#define INFINIPATH_IBCC_LINKCMD_ARMED 2 /* move to 0x21 */
#define INFINIPATH_IBCC_LINKCMD_ACTIVE 3 /* move to 0x31 */
#define INFINIPATH_IBCC_LINKCMD_SHIFT 18
#include <net/icmp.h>
#include <linux/icmpv6.h>
#include <linux/delay.h>
+#include <linux/vmalloc.h>
#include "ipoib.h"
priv->tx_sge[0].addr = addr;
priv->tx_sge[0].length = len;
+ priv->tx_wr.num_sge = 1;
priv->tx_wr.wr_id = wr_id | IPOIB_OP_CM;
return ib_post_send(tx->qp, &priv->tx_wr, &bad_wr);
struct ipoib_dev_priv *priv = netdev_priv(p->dev);
int ret;
- p->tx_ring = kzalloc(ipoib_sendq_size * sizeof *p->tx_ring,
- GFP_KERNEL);
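+	/* the send ring can be large; vmalloc avoids high-order contiguous allocations */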
+ p->tx_ring = vmalloc(ipoib_sendq_size * sizeof *p->tx_ring);
if (!p->tx_ring) {
ipoib_warn(priv, "failed to allocate tx ring\n");
ret = -ENOMEM;
goto err_tx;
}
+ memset(p->tx_ring, 0, ipoib_sendq_size * sizeof *p->tx_ring);
p->qp = ipoib_cm_create_tx_qp(p->dev, p);
if (IS_ERR(p->qp)) {
ib_destroy_qp(p->qp);
err_qp:
p->qp = NULL;
+ vfree(p->tx_ring);
err_tx:
return ret;
}
if (p->qp)
ib_destroy_qp(p->qp);
- kfree(p->tx_ring);
+ vfree(p->tx_ring);
kfree(p);
}
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/kernel.h>
+#include <linux/vmalloc.h>
#include <linux/if_arp.h> /* For ARPHRD_xxx */
goto out;
}
- priv->tx_ring = kzalloc(ipoib_sendq_size * sizeof *priv->tx_ring,
- GFP_KERNEL);
+ priv->tx_ring = vmalloc(ipoib_sendq_size * sizeof *priv->tx_ring);
if (!priv->tx_ring) {
printk(KERN_WARNING "%s: failed to allocate TX ring (%d entries)\n",
ca->name, ipoib_sendq_size);
goto out_rx_ring_cleanup;
}
+ memset(priv->tx_ring, 0, ipoib_sendq_size * sizeof *priv->tx_ring);
/* priv->tx_head, tx_tail & tx_outstanding are already 0 */
return 0;
out_tx_ring_cleanup:
- kfree(priv->tx_ring);
+ vfree(priv->tx_ring);
out_rx_ring_cleanup:
kfree(priv->rx_ring);
ipoib_ib_dev_cleanup(dev);
kfree(priv->rx_ring);
- kfree(priv->tx_ring);
+ vfree(priv->tx_ring);
priv->rx_ring = NULL;
priv->tx_ring = NULL;
*/
spin_lock(&priv->lock);
- if (!test_bit(IPOIB_MCAST_STARTED, &priv->flags) ||
+ if (!test_bit(IPOIB_FLAG_OPER_UP, &priv->flags) ||
!priv->broadcast ||
!test_bit(IPOIB_MCAST_FLAG_ATTACHED, &priv->broadcast->flags)) {
++dev->stats.tx_dropped;
depends on ACPI
depends on LEDS_CLASS
depends on BACKLIGHT_CLASS_DEVICE
+ depends on SERIO_I8042
select ACPI_WMI
---help---
This is a driver for newer Acer (and Wistron) laptops. It adds
},
.driver_data = &quirk_acer_travelmate_2490,
},
+ {
+ .callback = dmi_matched,
+ .ident = "Acer Aspire 3610",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 3610"),
+ },
+ .driver_data = &quirk_acer_travelmate_2490,
+ },
{
.callback = dmi_matched,
.ident = "Acer Aspire 5100",
},
.driver_data = &quirk_acer_travelmate_2490,
},
+ {
+ .callback = dmi_matched,
+ .ident = "Acer Aspire 5610",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5610"),
+ },
+ .driver_data = &quirk_acer_travelmate_2490,
+ },
{
.callback = dmi_matched,
.ident = "Acer Aspire 5630",
}
static struct led_classdev mail_led = {
- .name = "acer-mail:green",
+ .name = "acer-wmi::mail",
.brightness_set = mail_led_set,
};
-static int __init acer_led_init(struct device *dev)
+static int __devinit acer_led_init(struct device *dev)
{
return led_classdev_register(dev, &mail_led);
}
.update_status = update_bl_status,
};
-static int __init acer_backlight_init(struct device *dev)
+static int __devinit acer_backlight_init(struct device *dev)
{
struct backlight_device *bd;
return 0;
}
-static void __exit acer_backlight_exit(void)
+static void acer_backlight_exit(void)
{
backlight_device_unregister(acer_backlight_device);
}
if (wmi_has_guid(WMID_GUID2) && interface) {
if (ACPI_FAILURE(WMID_set_capabilities())) {
- printk(ACER_ERR "Unable to detect available devices\n");
+ printk(ACER_ERR "Unable to detect available WMID "
+ "devices\n");
return -ENODEV;
}
} else if (!wmi_has_guid(WMID_GUID2) && interface) {
- printk(ACER_ERR "Unable to detect available devices\n");
+ printk(ACER_ERR "No WMID device detection method found\n");
return -ENODEV;
}
interface = &AMW0_interface;
if (ACPI_FAILURE(AMW0_set_capabilities())) {
- printk(ACER_ERR "Unable to detect available devices\n");
+ printk(ACER_ERR "Unable to detect available AMW0 "
+ "devices\n");
return -ENODEV;
}
}
- if (wmi_has_guid(AMW0_GUID1)) {
- if (ACPI_FAILURE(AMW0_find_mailled()))
- printk(ACER_ERR "Unable to detect mail LED\n");
- }
+ if (wmi_has_guid(AMW0_GUID1))
+ AMW0_find_mailled();
find_quirks();
if (!interface) {
- printk(ACER_ERR "No or unsupported WMI interface, unable to ");
- printk(KERN_CONT "load.\n");
+ printk(ACER_ERR "No or unsupported WMI interface, unable to "
+ "load\n");
return -ENODEV;
}
break;
default:
- if (event > ARRAY_SIZE(sony_laptop_input_index)) {
+ if (event >= ARRAY_SIZE(sony_laptop_input_index)) {
dprintk("sony_laptop_report_input_event, event not known: %d\n", event);
break;
}
host->sg_pos++;
if (host->sg_pos == host->sg_len) {
if ((r_data->flags & MMC_DATA_WRITE)
- && DATA_CARRY)
+ && (host->cmd_flags & DATA_CARRY))
writel(host->bounce_buf_data[0],
host->dev->addr
+ SOCK_MMCSD_DATA);
struct kobj_attribute *attr,
const char *buf, size_t count)
{
- return pdcs_auto_write(kset, attr, buf, count, PF_AUTOBOOT);
+ return pdcs_auto_write(kobj, attr, buf, count, PF_AUTOBOOT);
}
/**
struct kobj_attribute *attr,
const char *buf, size_t count)
{
- return pdcs_auto_write(kset, attr, buf, count, PF_AUTOSEARCH);
+ return pdcs_auto_write(kobj, attr, buf, count, PF_AUTOSEARCH);
}
/**
}
/* Don't forget the root entries */
- error = sysfs_create_group(stable_kobj, pdcs_attr_group);
+ error = sysfs_create_group(stable_kobj, &pdcs_attr_group);
/* register the paths kset as a child of the stable kset */
paths_kset = kset_create_and_add("paths", NULL, stable_kobj);
#define RESMAP_MASK(n) (~0UL << (BITS_PER_LONG - (n)))
#define RESMAP_IDX_MASK (sizeof(unsigned long) - 1)
-unsigned long ptr_to_pide(struct ioc *ioc, unsigned long *res_ptr,
- unsigned int bitshiftcnt)
+static unsigned long ptr_to_pide(struct ioc *ioc, unsigned long *res_ptr,
+ unsigned int bitshiftcnt)
{
return (((unsigned long)res_ptr - (unsigned long)ioc->res_map) << 3)
+ bitshiftcnt;
/* register the bus with sysfs as the parent is now
* properly registered. */
child_bus = dev->subordinate;
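+		/* skip child buses that were already registered with the driver core */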
+ if (child_bus->is_added)
+ continue;
child_bus->dev.parent = child_bus->bridge;
retval = device_register(&child_bus->dev);
if (retval)
dev_err(&dev->dev, "Error registering pci_bus,"
" continuing...\n");
- else
+ else {
+ child_bus->is_added = 1;
retval = device_create_file(&child_bus->dev,
&dev_attr_cpuaffinity);
+ }
if (retval)
dev_err(&dev->dev, "Error creating cpuaffinity"
" file, continuing...\n");
{
acpi_handle handle = DEVICE_ACPI_HANDLE(&dev->dev);
acpi_handle tmp;
- static int state_conv[] = {
- [0] = 0,
- [1] = 1,
- [2] = 2,
- [3] = 3,
- [4] = 3
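+	/* map each pci_power_t to the matching ACPI D-state; D3cold folds into D3 */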
+ static const u8 state_conv[] = {
+ [PCI_D0] = ACPI_STATE_D0,
+ [PCI_D1] = ACPI_STATE_D1,
+ [PCI_D2] = ACPI_STATE_D2,
+ [PCI_D3hot] = ACPI_STATE_D3,
+ [PCI_D3cold] = ACPI_STATE_D3
};
- int acpi_state = state_conv[(int __force) state];
if (!handle)
return -ENODEV;
/* If the ACPI device has _EJ0, ignore the device */
if (ACPI_SUCCESS(acpi_get_handle(handle, "_EJ0", &tmp)))
return 0;
- return acpi_bus_set_power(handle, acpi_state);
+
+ switch (state) {
+ case PCI_D0:
+ case PCI_D1:
+ case PCI_D2:
+ case PCI_D3hot:
+ case PCI_D3cold:
+ return acpi_bus_set_power(handle, state_conv[state]);
+ }
+ return -EINVAL;
}
static void au1550_spi_bits_handlers_set(struct au1550_spi *hw, int bpw);
-/**
+/*
* compute BRG and DIV bits to setup spi clock based on main input clock rate
* that was specified in platform data structure
* according to au1550 datasheet:
return hw->txrx_bufs(spi, t);
}
-static irqreturn_t au1550_spi_irq(int irq, void *dev, struct pt_regs *regs)
+static irqreturn_t au1550_spi_irq(int irq, void *dev)
{
struct au1550_spi *hw = dev;
return hw->irq_callback(hw);
t->rx_dma = t->tx_dma = 0;
status = bitbang->txrx_bufs(spi, t);
}
+ if (status > 0)
+ m->actual_length += status;
if (status != t->len) {
- if (status > 0)
- status = -EMSGSIZE;
+ /* always report some kind of error */
+ if (status >= 0)
+ status = -EREMOTEIO;
break;
}
- m->actual_length += status;
status = 0;
/* protocol tweaks before next transfer */
menuconfig THERMAL
bool "Generic Thermal sysfs driver"
+ select HWMON
default y
help
Generic Thermal Sysfs driver offers a generic mechanism for
#include <linux/idr.h>
#include <linux/thermal.h>
#include <linux/spinlock.h>
+#include <linux/hwmon.h>
+#include <linux/hwmon-sysfs.h>
-MODULE_AUTHOR("Zhang Rui")
+MODULE_AUTHOR("Zhang Rui");
MODULE_DESCRIPTION("Generic thermal management sysfs support");
MODULE_LICENSE("GPL");
static LIST_HEAD(thermal_cdev_list);
static DEFINE_MUTEX(thermal_list_lock);
+static struct device *thermal_hwmon;
+#define MAX_THERMAL_ZONES 10
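+/* must match the number of tempN_input/tempN_crit pairs in sensor_attrs[] below */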
+
static int get_idr(struct idr *idr, struct mutex *lock, int *id)
{
int err;
mutex_unlock(lock);
}
-/* sys I/F for thermal zone */
+/* hwmon sys I/F */
+static ssize_t
+name_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ return sprintf(buf, "thermal_sys_class\n");
+}
+
+static ssize_t
+temp_input_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct thermal_zone_device *tz;
+ struct sensor_device_attribute *sensor_attr
+ = to_sensor_dev_attr(attr);
+
+ list_for_each_entry(tz, &thermal_tz_list, node)
+ if (tz->id == sensor_attr->index)
+ return tz->ops->get_temp(tz, buf);
+
+ return -ENODEV;
+}
+
+static ssize_t
+temp_crit_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct thermal_zone_device *tz;
+ struct sensor_device_attribute *sensor_attr
+ = to_sensor_dev_attr(attr);
+
+ list_for_each_entry(tz, &thermal_tz_list, node)
+ if (tz->id == sensor_attr->index)
+ return tz->ops->get_trip_temp(tz, 0, buf);
+
+ return -ENODEV;
+}
+
+static DEVICE_ATTR(name, 0444, name_show, NULL);
+static struct sensor_device_attribute sensor_attrs[] = {
+ SENSOR_ATTR(temp1_input, 0444, temp_input_show, NULL, 0),
+ SENSOR_ATTR(temp1_crit, 0444, temp_crit_show, NULL, 0),
+ SENSOR_ATTR(temp2_input, 0444, temp_input_show, NULL, 1),
+ SENSOR_ATTR(temp2_crit, 0444, temp_crit_show, NULL, 1),
+ SENSOR_ATTR(temp3_input, 0444, temp_input_show, NULL, 2),
+ SENSOR_ATTR(temp3_crit, 0444, temp_crit_show, NULL, 2),
+ SENSOR_ATTR(temp4_input, 0444, temp_input_show, NULL, 3),
+ SENSOR_ATTR(temp4_crit, 0444, temp_crit_show, NULL, 3),
+ SENSOR_ATTR(temp5_input, 0444, temp_input_show, NULL, 4),
+ SENSOR_ATTR(temp5_crit, 0444, temp_crit_show, NULL, 4),
+ SENSOR_ATTR(temp6_input, 0444, temp_input_show, NULL, 5),
+ SENSOR_ATTR(temp6_crit, 0444, temp_crit_show, NULL, 5),
+ SENSOR_ATTR(temp7_input, 0444, temp_input_show, NULL, 6),
+ SENSOR_ATTR(temp7_crit, 0444, temp_crit_show, NULL, 6),
+ SENSOR_ATTR(temp8_input, 0444, temp_input_show, NULL, 7),
+ SENSOR_ATTR(temp8_crit, 0444, temp_crit_show, NULL, 7),
+ SENSOR_ATTR(temp9_input, 0444, temp_input_show, NULL, 8),
+ SENSOR_ATTR(temp9_crit, 0444, temp_crit_show, NULL, 8),
+ SENSOR_ATTR(temp10_input, 0444, temp_input_show, NULL, 9),
+ SENSOR_ATTR(temp10_crit, 0444, temp_crit_show, NULL, 9),
+};
+
+/* thermal zone sys I/F */
#define to_thermal_zone(_dev) \
container_of(_dev, struct thermal_zone_device, device)
device_remove_file(_dev, &trip_point_attrs[_index * 2 + 1]); \
} while (0)
-/* sys I/F for cooling device */
+/* cooling device sys I/F */
#define to_cooling_device(_dev) \
container_of(_dev, struct thermal_cooling_device, device)
struct thermal_zone_device *pos;
int result;
+ if (!type)
+ return ERR_PTR(-EINVAL);
+
if (strlen(type) >= THERMAL_NAME_LENGTH)
return ERR_PTR(-EINVAL);
}
/* sys I/F */
- if (type) {
- result = device_create_file(&cdev->device, &dev_attr_cdev_type);
- if (result)
- goto unregister;
- }
+ result = device_create_file(&cdev->device, &dev_attr_cdev_type);
+ if (result)
+ goto unregister;
result = device_create_file(&cdev->device, &dev_attr_max_state);
if (result)
tz->ops->unbind(tz, cdev);
}
mutex_unlock(&thermal_list_lock);
- if (cdev->type[0])
- device_remove_file(&cdev->device, &dev_attr_cdev_type);
+
+ device_remove_file(&cdev->device, &dev_attr_cdev_type);
device_remove_file(&cdev->device, &dev_attr_max_state);
device_remove_file(&cdev->device, &dev_attr_cur_state);
int result;
int count;
+ if (!type)
+ return ERR_PTR(-EINVAL);
+
if (strlen(type) >= THERMAL_NAME_LENGTH)
return ERR_PTR(-EINVAL);
kfree(tz);
return ERR_PTR(result);
}
+ if (tz->id >= MAX_THERMAL_ZONES) {
+ printk(KERN_ERR PREFIX
+ "Too many thermal zones\n");
+ release_idr(&thermal_tz_idr, &thermal_idr_lock, tz->id);
+ kfree(tz);
+ return ERR_PTR(-EINVAL);
+ }
strcpy(tz->type, type);
tz->ops = ops;
return ERR_PTR(result);
}
- /* sys I/F */
- if (type) {
- result = device_create_file(&tz->device, &dev_attr_type);
- if (result)
- goto unregister;
+ /* hwmon sys I/F */
+ result = device_create_file(thermal_hwmon,
+ &sensor_attrs[tz->id * 2].dev_attr);
+ if (result)
+ goto unregister;
+
+ if (trips > 0) {
+ char buf[40];
+ result = tz->ops->get_trip_type(tz, 0, buf);
+ if (result > 0 && !strcmp(buf, "critical\n")) {
+ result = device_create_file(thermal_hwmon,
+ &sensor_attrs[tz->id * 2 + 1].dev_attr);
+ if (result)
+ goto unregister;
+ }
}
+ /* sys I/F */
+ result = device_create_file(&tz->device, &dev_attr_type);
+ if (result)
+ goto unregister;
+
result = device_create_file(&tz->device, &dev_attr_temp);
if (result)
goto unregister;
tz->ops->unbind(tz, cdev);
mutex_unlock(&thermal_list_lock);
- if (tz->type[0])
- device_remove_file(&tz->device, &dev_attr_type);
+ device_remove_file(thermal_hwmon,
+ &sensor_attrs[tz->id * 2].dev_attr);
+ if (tz->trips > 0) {
+ char buf[40];
+ if (tz->ops->get_trip_type(tz, 0, buf) > 0)
+ if (!strcmp(buf, "critical\n"))
+ device_remove_file(thermal_hwmon,
+ &sensor_attrs[tz->id * 2 + 1].dev_attr);
+ }
+
+ device_remove_file(&tz->device, &dev_attr_type);
device_remove_file(&tz->device, &dev_attr_temp);
if (tz->ops->get_mode)
device_remove_file(&tz->device, &dev_attr_mode);
EXPORT_SYMBOL(thermal_zone_device_unregister);
+static void thermal_exit(void)
+{
+ if (thermal_hwmon) {
+ device_remove_file(thermal_hwmon, &dev_attr_name);
+ hwmon_device_unregister(thermal_hwmon);
+ }
+ class_unregister(&thermal_class);
+ idr_destroy(&thermal_tz_idr);
+ idr_destroy(&thermal_cdev_idr);
+ mutex_destroy(&thermal_idr_lock);
+ mutex_destroy(&thermal_list_lock);
+}
+
static int __init thermal_init(void)
{
int result = 0;
mutex_destroy(&thermal_idr_lock);
mutex_destroy(&thermal_list_lock);
}
- return result;
-}
-static void __exit thermal_exit(void)
-{
- class_unregister(&thermal_class);
- idr_destroy(&thermal_tz_idr);
- idr_destroy(&thermal_cdev_idr);
- mutex_destroy(&thermal_idr_lock);
- mutex_destroy(&thermal_list_lock);
+ thermal_hwmon = hwmon_device_register(NULL);
+ if (IS_ERR(thermal_hwmon)) {
+ result = PTR_ERR(thermal_hwmon);
+ thermal_hwmon = NULL;
+ printk(KERN_ERR PREFIX
+ "unable to register hwmon device\n");
+ thermal_exit();
+ return result;
+ }
+
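+	/* the hwmon sysfs interface requires a "name" attribute on every hwmon device */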
+ result = device_create_file(thermal_hwmon, &dev_attr_name);
+
+ return result;
}
subsys_initcall(thermal_init);
fhp->fh_dentry = dentry;
fhp->fh_export = exp;
nfsd_nr_verified++;
+ cache_get(&exp->h);
} else {
/*
* just rechecking permissions
dprintk("nfsd: fh_verify - just checking\n");
dentry = fhp->fh_dentry;
exp = fhp->fh_export;
+ cache_get(&exp->h);
/*
* Set user creds for this exportpoint; necessary even
* in the "just checking" case because this may be a
if (error)
goto out;
}
- cache_get(&exp->h);
-
error = nfsd_mode_check(rqstp, dentry->d_inode->i_mode, type);
if (error)
ret = -EACCES;
if (!ptrace_may_attach(task))
- goto out;
+ goto out_task;
ret = -EINVAL;
/* file position must be aligned */
if (*ppos % PM_ENTRY_BYTES)
- goto out;
+ goto out_task;
ret = 0;
mm = get_task_mm(task);
if (!mm)
- goto out;
+ goto out_task;
ret = -ENOMEM;
uaddr = (unsigned long)buf & PAGE_MASK;
pagecount = (PAGE_ALIGN(uend) - uaddr) / PAGE_SIZE;
pages = kmalloc(pagecount * sizeof(struct page *), GFP_KERNEL);
if (!pages)
- goto out_task;
+ goto out_mm;
down_read(¤t->mm->mmap_sem);
ret = get_user_pages(current, current->mm, uaddr, pagecount,
if (ret < 0)
goto out_free;
+ if (ret != pagecount) {
+ pagecount = ret;
+ ret = -EFAULT;
+ goto out_pages;
+ }
+
pm.out = buf;
pm.end = buf + count;
ret = pm.out - buf;
}
+out_pages:
for (; pagecount; pagecount--) {
page = pages[pagecount-1];
if (!PageReserved(page))
SetPageDirty(page);
page_cache_release(page);
}
- mmput(mm);
out_free:
kfree(pages);
+out_mm:
+ mmput(mm);
out_task:
put_task_struct(task);
out:
#define get_user(x, ptr) \
({ \
int __gu_err = 0; \
- uint32_t __gu_val = 0; \
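+	/* a typeof temporary keeps 8-byte values from being truncated through a u32 */ \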
+	typeof(*(ptr)) __gu_val = *(ptr);	\
switch (sizeof(*(ptr))) { \
case 1: \
case 2: \
case 4: \
- __gu_val = *(ptr); \
- break; \
- case 8: \
- memcpy(&__gu_val, ptr, sizeof (*(ptr))); \
+ case 8: \
break; \
default: \
- __gu_val = 0; \
__gu_err = __get_user_bad(); \
+ __gu_val = 0; \
break; \
} \
- (x) = (typeof(*(ptr)))__gu_val; \
+ (x) = __gu_val; \
__gu_err; \
})
#define __get_user(x, ptr) get_user(x, ptr)
/*
* The following definitions are those for 32-bit ELF binaries on a 32-bit
* kernel and for 64-bit binaries on a 64-bit kernel. To run 32-bit binaries
- * on a 64-bit kernel, arch/parisc64/kernel/binfmt_elf32.c defines these
+ * on a 64-bit kernel, arch/parisc/kernel/binfmt_elf32.c defines these
* macros appropriately and then #includes binfmt_elf.c, which then includes
* this file.
*/
* Note that this header file is used by default in fs/binfmt_elf.c. So
* the following macros are for the default case. However, for the 64
* bit kernel we also support 32 bit parisc binaries. To do that
- * arch/parisc64/kernel/binfmt_elf32.c defines its own set of these
+ * arch/parisc/kernel/binfmt_elf32.c defines its own set of these
* macros, and then it includes fs/binfmt_elf.c to provide an alternate
* elf binary handler for 32 bit binaries (on the 64 bit kernel).
*/
#ifdef CONFIG_64BIT
-#define ELF_CLASS ELFCLASS64
+#define ELF_CLASS ELFCLASS64
#else
#define ELF_CLASS ELFCLASS32
#endif
typedef unsigned long elf_greg_t;
-/* This yields a string that ld.so will use to load implementation
- specific libraries for optimization. This is more specific in
- intent than poking at uname or /proc/cpuinfo.
-
- For the moment, we have only optimizations for the Intel generations,
- but that could change... */
+/*
+ * This yields a string that ld.so will use to load implementation
+ * specific libraries for optimization. This is more specific in
+ * intent than poking at uname or /proc/cpuinfo.
+ */
-#define ELF_PLATFORM ("PARISC\0" /*+((boot_cpu_data.x86-3)*5) */)
+#define ELF_PLATFORM ("PARISC\0")
#define SET_PERSONALITY(ex, ibcs2) \
current->personality = PER_LINUX; \
#define ELF_OSABI ELFOSABI_LINUX
/* %r23 is set by ld.so to a pointer to a function which might be
- registered using atexit. This provides a mean for the dynamic
+ registered using atexit. This provides a means for the dynamic
linker to call DT_FINI functions for shared libraries that have
been loaded before the code runs.
but it's not easy, and we've already done it here. */
#define ELF_HWCAP 0
-/* (boot_cpu_data.x86_capability) */
#endif
#define KERNEL_MAP_START (GATEWAY_PAGE_SIZE)
#define KERNEL_MAP_END (TMPALIAS_MAP_START)
-#endif
+#ifndef __ASSEMBLY__
+extern void *vmalloc_start;
+#define PCXL_DMA_MAP_SIZE (8*1024*1024)
+#define VMALLOC_START ((unsigned long)vmalloc_start)
+#define VMALLOC_END (KERNEL_MAP_END)
+#endif /*__ASSEMBLY__*/
+
+#endif /*_ASM_FIXMAP_H*/
int err = 0;
int uval;
+ /* futex.c wants to do a cmpxchg_inatomic on kernel NULL, which is
+ * our gateway page, and causes no end of trouble...
+ */
+ if (segment_eq(KERNEL_DS, get_fs()) && !uaddr)
+ return -EFAULT;
+
if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
return -EFAULT;
return uval;
}
-#endif
-#endif
+#endif /*__KERNEL__*/
+#endif /*_ASM_PARISC_FUTEX_H*/
void pdc_io_reset(void);
void pdc_io_reset_devices(void);
int pdc_iodc_getc(void);
-int pdc_iodc_print(unsigned char *str, unsigned count);
-void pdc_printf(const char *fmt, ...);
+int pdc_iodc_print(const unsigned char *str, unsigned count);
void pdc_emergency_unlock(void);
int pdc_sti_call(unsigned long func, unsigned long flags,
free_page((unsigned long)pte);
}
-static inline void pte_free_kernel(struct mm_struct *mm, struct page *pte)
+static inline void pte_free(struct mm_struct *mm, struct page *pte)
{
pgtable_page_dtor(pte);
- pte_free_kernel(page_address((pte));
+ pte_free_kernel(mm, page_address(pte));
}
#define check_pgt_cache() do { } while (0)
#define FIRST_USER_ADDRESS 0
-#ifndef __ASSEMBLY__
-extern void *vmalloc_start;
-#define PCXL_DMA_MAP_SIZE (8*1024*1024)
-#define VMALLOC_START ((unsigned long)vmalloc_start)
-/* this is a fixmap remnant, see fixmap.h */
-#define VMALLOC_END (KERNEL_MAP_END)
-#endif
-
/* NB: The tlb miss handlers make certain assumptions about the order */
/* of the following bits, so be careful (One example, bits 25-31 */
/* are moved together in one instruction). */
#define __NR_timerfd (__NR_Linux + 303)
#define __NR_eventfd (__NR_Linux + 304)
#define __NR_fallocate (__NR_Linux + 305)
+#define __NR_timerfd_create (__NR_Linux + 306)
+#define __NR_timerfd_settime (__NR_Linux + 307)
+#define __NR_timerfd_gettime (__NR_Linux + 308)
-#define __NR_Linux_syscalls (__NR_fallocate + 1)
+#define __NR_Linux_syscalls (__NR_timerfd_gettime + 1)
#define __IGNORE_select /* newselect */
/**
* struct export_operations - for nfsd to communicate with file systems
- * @decode_fh: decode a file handle fragment and return a &struct dentry
* @encode_fh: encode a file handle fragment from a dentry
+ * @fh_to_dentry: find the implied object and get a dentry for it
+ * @fh_to_parent: find the implied object's parent and get a dentry for it
* @get_name: find the name for a given inode in a given directory
* @get_parent: find the parent of a given directory
- * @get_dentry: find a dentry for the inode given a file handle sub-fragment
*
* See Documentation/filesystems/Exporting for details on how to use
* this interface correctly.
struct device dev;
struct bin_attribute *legacy_io; /* legacy I/O for this bus */
struct bin_attribute *legacy_mem; /* legacy mem */
+ unsigned int is_added:1;
};
#define pci_bus_b(n) list_entry(n, struct pci_bus, node)
unsigned short header_len; /* more space at head required */
unsigned short trailer_len; /* space to reserve at tail */
- u32 metrics[RTAX_MAX];
- struct dst_entry *path;
-
- unsigned long rate_last; /* rate limiting for ICMP */
unsigned int rate_tokens;
+ unsigned long rate_last; /* rate limiting for ICMP */
-#ifdef CONFIG_NET_CLS_ROUTE
- __u32 tclassid;
-#endif
+ struct dst_entry *path;
struct neighbour *neighbour;
struct hh_cache *hh;
int (*output)(struct sk_buff*);
struct dst_ops *ops;
-
- unsigned long lastuse;
+
+ u32 metrics[RTAX_MAX];
+
+#ifdef CONFIG_NET_CLS_ROUTE
+ __u32 tclassid;
+#endif
+
+ /*
+ * __refcnt wants to be on a different cache line from
+ * input/output/ops or performance tanks badly
+ */
atomic_t __refcnt; /* client references */
int __use;
+ unsigned long lastuse;
union {
struct dst_entry *next;
struct rtable *rt_next;
initrd_end = 0;
}
-int __init populate_rootfs(void)
+static int __init populate_rootfs(void)
{
char *err = unpack_to_rootfs(__initramfs_start,
__initramfs_end - __initramfs_start, 0);
}
return 0;
}
-#ifndef CONFIG_ACPI_CUSTOM_DSDT_INITRD
-/*
- * if this option is enabled, populate_rootfs() is called _earlier_ in the
- * boot sequence. This insures that the ACPI initialisation can find the file.
- */
rootfs_initcall(populate_rootfs);
-#endif
extern void tc_init(void);
#endif
-#ifdef CONFIG_ACPI_CUSTOM_DSDT_INITRD
-extern int populate_rootfs(void);
-#else
-static inline void populate_rootfs(void) {}
-#endif
-
enum system_states system_state;
EXPORT_SYMBOL(system_state);
check_bugs();
- populate_rootfs(); /* For DSDT override from initramfs */
acpi_early_init(); /* before LAPIC and SMP init */
/* Do the rest non-__init'ed, we're now alive */
notification of APM "events" (e.g. battery status change).
In order to use APM, you will need supporting software. For location
- and more information, read <file:Documentation/pm.txt> and the
+ and more information, read <file:Documentation/power/pm.txt> and the
Battery Powered Linux mini-HOWTO, available from
<http://www.tldp.org/docs.html#howto>.
* of @bm->cur_zone_bm are updated.
*/
-static void memory_bm_find_bit(struct memory_bitmap *bm, unsigned long pfn,
+static int memory_bm_find_bit(struct memory_bitmap *bm, unsigned long pfn,
void **addr, unsigned int *bit_nr)
{
struct zone_bitmap *zone_bm;
while (pfn < zone_bm->start_pfn || pfn >= zone_bm->end_pfn) {
zone_bm = zone_bm->next;
- BUG_ON(!zone_bm);
+ if (!zone_bm)
+ return -EFAULT;
}
bm->cur.zone_bm = zone_bm;
}
pfn -= bb->start_pfn;
*bit_nr = pfn % BM_BITS_PER_CHUNK;
*addr = bb->data + pfn / BM_BITS_PER_CHUNK;
+ return 0;
}
static void memory_bm_set_bit(struct memory_bitmap *bm, unsigned long pfn)
{
void *addr;
unsigned int bit;
+ int error;
- memory_bm_find_bit(bm, pfn, &addr, &bit);
+ error = memory_bm_find_bit(bm, pfn, &addr, &bit);
+ BUG_ON(error);
set_bit(bit, addr);
}
+static int mem_bm_set_bit_check(struct memory_bitmap *bm, unsigned long pfn)
+{
+ void *addr;
+ unsigned int bit;
+ int error;
+
+ error = memory_bm_find_bit(bm, pfn, &addr, &bit);
+ if (!error)
+ set_bit(bit, addr);
+ return error;
+}
+
static void memory_bm_clear_bit(struct memory_bitmap *bm, unsigned long pfn)
{
void *addr;
unsigned int bit;
+ int error;
- memory_bm_find_bit(bm, pfn, &addr, &bit);
+ error = memory_bm_find_bit(bm, pfn, &addr, &bit);
+ BUG_ON(error);
clear_bit(bit, addr);
}
{
void *addr;
unsigned int bit;
+ int error;
- memory_bm_find_bit(bm, pfn, &addr, &bit);
+ error = memory_bm_find_bit(bm, pfn, &addr, &bit);
+ BUG_ON(error);
return test_bit(bit, addr);
}
region->end_pfn << PAGE_SHIFT);
for (pfn = region->start_pfn; pfn < region->end_pfn; pfn++)
- if (pfn_valid(pfn))
- memory_bm_set_bit(bm, pfn);
+ if (pfn_valid(pfn)) {
+ /*
+ * It is safe to ignore the result of
+ * mem_bm_set_bit_check() here, since we won't
+ * touch the PFNs for which the error is
+ * returned anyway.
+ */
+ mem_bm_set_bit_check(bm, pfn);
+ }
}
}
/* 'curr' points to currently running entity on this cfs_rq.
 * It is set to NULL otherwise (i.e., when none are currently running).
*/
- struct sched_entity *curr;
+ struct sched_entity *curr, *next;
unsigned long nr_spread_over;
u64 tmp;
if (unlikely(!lw->inv_weight))
- lw->inv_weight = (WMULT_CONST - lw->weight/2) / lw->weight + 1;
+		lw->inv_weight = (WMULT_CONST - lw->weight/2) / (lw->weight + 1);
tmp = (u64)delta_exec * weight;
/*
static inline void update_load_add(struct load_weight *lw, unsigned long inc)
{
lw->weight += inc;
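+	/* invalidate the cached inverse; calc_delta_mine() will recompute it lazily */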
+ lw->inv_weight = 0;
}
static inline void update_load_sub(struct load_weight *lw, unsigned long dec)
{
lw->weight -= dec;
+ lw->inv_weight = 0;
}
/*
oldprio = p->prio;
on_rq = p->se.on_rq;
running = task_current(rq, p);
- if (on_rq) {
+ if (on_rq)
dequeue_task(rq, p, 0);
- if (running)
- p->sched_class->put_prev_task(rq, p);
- }
+ if (running)
+ p->sched_class->put_prev_task(rq, p);
if (rt_prio(prio))
p->sched_class = &rt_sched_class;
p->prio = prio;
+ if (running)
+ p->sched_class->set_curr_task(rq);
if (on_rq) {
- if (running)
- p->sched_class->set_curr_task(rq);
-
enqueue_task(rq, p, 0);
check_class_changed(rq, p, prev_class, oldprio, running);
update_rq_clock(rq);
on_rq = p->se.on_rq;
running = task_current(rq, p);
- if (on_rq) {
+ if (on_rq)
deactivate_task(rq, p, 0);
- if (running)
- p->sched_class->put_prev_task(rq, p);
- }
+ if (running)
+ p->sched_class->put_prev_task(rq, p);
oldprio = p->prio;
__setscheduler(rq, p, policy, param->sched_priority);
+ if (running)
+ p->sched_class->set_curr_task(rq);
if (on_rq) {
- if (running)
- p->sched_class->set_curr_task(rq);
-
activate_task(rq, p, 0);
check_class_changed(rq, p, prev_class, oldprio, running);
running = task_current(rq, tsk);
on_rq = tsk->se.on_rq;
- if (on_rq) {
+ if (on_rq)
dequeue_task(rq, tsk, 0);
- if (unlikely(running))
- tsk->sched_class->put_prev_task(rq, tsk);
- }
+ if (unlikely(running))
+ tsk->sched_class->put_prev_task(rq, tsk);
set_task_rq(tsk, task_cpu(tsk));
tsk->sched_class->moved_group(tsk);
#endif
- if (on_rq) {
- if (unlikely(running))
- tsk->sched_class->set_curr_task(rq);
+ if (unlikely(running))
+ tsk->sched_class->set_curr_task(rq);
+ if (on_rq)
enqueue_task(rq, tsk, 0);
- }
task_rq_unlock(rq, &flags);
}
* Maintain a cache of leftmost tree entries (it is frequently
* used):
*/
- if (leftmost)
+ if (leftmost) {
cfs_rq->rb_leftmost = &se->run_node;
+ /*
+		 * maintain cfs_rq->min_vruntime to be a monotonically increasing
+ * value tracking the leftmost vruntime in the tree.
+ */
+ cfs_rq->min_vruntime =
+ max_vruntime(cfs_rq->min_vruntime, se->vruntime);
+ }
rb_link_node(&se->run_node, parent, link);
rb_insert_color(&se->run_node, &cfs_rq->tasks_timeline);
static void __dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
- if (cfs_rq->rb_leftmost == &se->run_node)
- cfs_rq->rb_leftmost = rb_next(&se->run_node);
+ if (cfs_rq->rb_leftmost == &se->run_node) {
+ struct rb_node *next_node;
+ struct sched_entity *next;
+
+ next_node = rb_next(&se->run_node);
+ cfs_rq->rb_leftmost = next_node;
+
+ if (next_node) {
+ next = rb_entry(next_node,
+ struct sched_entity, run_node);
+ cfs_rq->min_vruntime =
+ max_vruntime(cfs_rq->min_vruntime,
+ next->vruntime);
+ }
+ }
+
+ if (cfs_rq->next == se)
+ cfs_rq->next = NULL;
rb_erase(&se->run_node, &cfs_rq->tasks_timeline);
}
*/
static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
- u64 slice = __sched_period(cfs_rq->nr_running);
-
- slice *= se->load.weight;
- do_div(slice, cfs_rq->load.weight);
-
- return slice;
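+	/* slice = period * se->weight / cfs_rq->weight, via calc_delta_mine()'s fixed-point math */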
+ return calc_delta_mine(__sched_period(cfs_rq->nr_running),
+ se->load.weight, &cfs_rq->load);
}
/*
unsigned long delta_exec)
{
unsigned long delta_exec_weighted;
- u64 vruntime;
schedstat_set(curr->exec_max, max((u64)delta_exec, curr->exec_max));
&curr->load);
}
curr->vruntime += delta_exec_weighted;
-
- /*
- * maintain cfs_rq->min_vruntime to be a monotonic increasing
- * value tracking the leftmost vruntime in the tree.
- */
- if (first_fair(cfs_rq)) {
- vruntime = min_vruntime(curr->vruntime,
- __pick_next_entity(cfs_rq)->vruntime);
- } else
- vruntime = curr->vruntime;
-
- cfs_rq->min_vruntime =
- max_vruntime(cfs_rq->min_vruntime, vruntime);
}
static void update_curr(struct cfs_rq *cfs_rq)
{
u64 vruntime;
- vruntime = cfs_rq->min_vruntime;
+ if (first_fair(cfs_rq)) {
+ vruntime = min_vruntime(cfs_rq->min_vruntime,
+ __pick_next_entity(cfs_rq)->vruntime);
+ } else
+ vruntime = cfs_rq->min_vruntime;
if (sched_feat(TREE_AVG)) {
struct sched_entity *last = __pick_last_entity(cfs_rq);
if (!initial) {
/* sleeps upto a single latency don't count. */
- if (sched_feat(NEW_FAIR_SLEEPERS))
- vruntime -= sysctl_sched_latency;
+ if (sched_feat(NEW_FAIR_SLEEPERS)) {
+ vruntime -= calc_delta_fair(sysctl_sched_latency,
+ &cfs_rq->load);
+ }
/* ensure we never gain time by being placed backwards. */
vruntime = max_vruntime(se->vruntime, vruntime);
se->prev_sum_exec_runtime = se->sum_exec_runtime;
}
+static struct sched_entity *
+pick_next(struct cfs_rq *cfs_rq, struct sched_entity *se)
+{
+ s64 diff, gran;
+
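+	/* run the cached next entity only while its vruntime stays within one
+	 * wakeup granularity of the leftmost entity's */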
+ if (!cfs_rq->next)
+ return se;
+
+ diff = cfs_rq->next->vruntime - se->vruntime;
+ if (diff < 0)
+ return se;
+
+ gran = calc_delta_fair(sysctl_sched_wakeup_granularity, &cfs_rq->load);
+ if (diff > gran)
+ return se;
+
+ return cfs_rq->next;
+}
+
static struct sched_entity *pick_next_entity(struct cfs_rq *cfs_rq)
{
struct sched_entity *se = NULL;
if (first_fair(cfs_rq)) {
se = __pick_next_entity(cfs_rq);
+ se = pick_next(cfs_rq, se);
set_next_entity(cfs_rq, se);
}
resched_task(curr);
return;
}
+
+ cfs_rq_of(pse)->next = pse;
+
/*
* Batch tasks do not preempt (their preemption is driven by
* the tick):
start_dma_addr = virt_to_bus(io_tlb_start) & mask;
offset_slots = ALIGN(start_dma_addr, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
- max_slots = ALIGN(mask + 1, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
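+	/* mask + 1 wraps to 0 for an all-ones mask; fall back to the maximum slot count */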
+ max_slots = mask + 1
+ ? ALIGN(mask + 1, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT
+ : 1UL << (BITS_PER_LONG - IO_TLB_SHIFT);
/*
* For mappings greater than a page, we limit the stride (and
index = ALIGN(io_tlb_index, stride);
if (index >= io_tlb_nslabs)
index = 0;
-
- while (is_span_boundary(index, nslots, offset_slots,
- max_slots)) {
- index += stride;
- if (index >= io_tlb_nslabs)
- index = 0;
- }
wrap = index;
do {
+ while (is_span_boundary(index, nslots, offset_slots,
+ max_slots)) {
+ index += stride;
+ if (index >= io_tlb_nslabs)
+ index = 0;
+ if (index == wrap)
+ goto not_found;
+ }
+
/*
* If we find a slot that indicates we have 'nslots'
* number of contiguous buffers, we allocate the
goto found;
}
- do {
- index += stride;
- if (index >= io_tlb_nslabs)
- index = 0;
- } while (is_span_boundary(index, nslots, offset_slots,
- max_slots));
+ index += stride;
+ if (index >= io_tlb_nslabs)
+ index = 0;
} while (index != wrap);
+ not_found:
spin_unlock_irqrestore(&io_tlb_lock, flags);
return NULL;
}
my ($type,$declaration_name,$return_type);
my ($newsection,$newcontents,$prototype,$filelist, $brcount, %source_map);
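+# inherit the build verbosity: "make V=1" exports KBUILD_VERBOSE to child processes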
+if (defined($ENV{'KBUILD_VERBOSE'})) {
+ $verbose = "$ENV{'KBUILD_VERBOSE'}";
+}
+
# Generated docbook code is inserted in a template at a point where
# docbook v3.1 requires a non-zero sequence of RefEntry's; see:
# http://www.oasis-open.org/docbook/documentation/reference/html/refentry.html
#define SMK_MAXLEN 23
#define SMK_LABELLEN (SMK_MAXLEN+1)
-/*
- * How many kinds of access are there?
- * Here's your answer.
- */
-#define SMK_ACCESSDASH '-'
-#define SMK_ACCESSLOW "rwxa"
-#define SMK_ACCESSKINDS (sizeof(SMK_ACCESSLOW) - 1)
-
struct superblock_smack {
char *smk_root;
char *smk_floor;
/*
* Values for parsing cipso rules
* SMK_DIGITLEN: Length of a digit field in a rule.
- * SMK_CIPSOMEN: Minimum possible cipso rule length.
+ * SMK_CIPSOMIN: Minimum possible cipso rule length.
+ * SMK_CIPSOMAX: Maximum possible cipso rule length.
*/
#define SMK_DIGITLEN 4
-#define SMK_CIPSOMIN (SMK_MAXLEN + 2 * SMK_DIGITLEN)
+#define SMK_CIPSOMIN (SMK_LABELLEN + 2 * SMK_DIGITLEN)
+#define SMK_CIPSOMAX (SMK_CIPSOMIN + SMACK_CIPSO_MAXCATNUM * SMK_DIGITLEN)
+
+/*
+ * Values for parsing MAC rules
+ * SMK_ACCESS: Maximum possible combination of access permissions
+ * SMK_ACCESSLEN: Maximum length for a rule access field
+ * SMK_LOADLEN: Smack rule length
+ */
+#define SMK_ACCESS "rwxa"
+#define SMK_ACCESSLEN (sizeof(SMK_ACCESS) - 1)
+#define SMK_LOADLEN (SMK_LABELLEN + SMK_LABELLEN + SMK_ACCESSLEN)
+
/*
* Seq_file read operations for /smack/load
* The format is exactly:
* char subject[SMK_LABELLEN]
* char object[SMK_LABELLEN]
- * char access[SMK_ACCESSKINDS]
- *
- * Anything following is commentary and ignored.
+ * char access[SMK_ACCESSLEN]
*
- * writes must be SMK_LABELLEN+SMK_LABELLEN+4 bytes.
+ * writes must be SMK_LABELLEN+SMK_LABELLEN+SMK_ACCESSLEN bytes.
*/
-#define MINIMUM_LOAD (SMK_LABELLEN + SMK_LABELLEN + SMK_ACCESSKINDS)
-
static ssize_t smk_write_load(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
return -EPERM;
if (*ppos != 0)
return -EINVAL;
- if (count < MINIMUM_LOAD)
+ if (count != SMK_LOADLEN)
return -EINVAL;
data = kzalloc(count, GFP_KERNEL);
return -EPERM;
if (*ppos != 0)
return -EINVAL;
- if (count <= SMK_CIPSOMIN)
+ if (count < SMK_CIPSOMIN || count > SMK_CIPSOMAX)
return -EINVAL;
data = kzalloc(count + 1, GFP_KERNEL);
if (ret != 1 || catlen > SMACK_CIPSO_MAXCATNUM)
goto out;
- if (count <= (SMK_CIPSOMIN + catlen * SMK_DIGITLEN))
+ if (count != (SMK_CIPSOMIN + catlen * SMK_DIGITLEN))
goto out;
memset(mapcatset, 0, sizeof(mapcatset));