Note that rcu_assign_pointer() and rcu_dereference() relate to
SRCU just as they do to other forms of RCU.
+
+15. The whole point of call_rcu(), synchronize_rcu(), and friends
+ is to wait until all pre-existing readers have finished before
+ carrying out some otherwise-destructive operation. It is
+ therefore critically important to -first- remove any path
+ that readers can follow that could be affected by the
+ destructive operation, and -only- -then- invoke call_rcu(),
+ synchronize_rcu(), or friends.
+
+ Because these primitives only wait for pre-existing readers,
+ it is the caller's responsibility to guarantee safety to
+ any subsequent readers.
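+
+	For example (a minimal sketch, assuming an element "p" on an
+	RCU-protected list whose readers traverse it under
+	rcu_read_lock()), the usual update-side sequence is:
+
+		list_del_rcu(&p->list);	/* unlink: no new readers find p */
+		synchronize_rcu();	/* wait for pre-existing readers */
+		kfree(p);		/* now safe to reclaim */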
Secmark, it is time to deprecate the older mechanism and start the
process of removing the old code.
Who: Paul Moore <paul.moore@hp.com>
+---------------------------
+
+What: sysfs ui for changing p4-clockmod parameters
+When: September 2009
+Why: See commits 129f8ae9b1b5be94517da76009ea956e89104ce8 and
+ e088e4c9cdb618675874becb91b2fd581ee707e6.
+ Removal is subject to fixing any remaining bugs in ACPI which may
+ cause the thermal throttling not to happen at the right time.
+Who: Dave Jones <davej@redhat.com>, Matthew Garrett <mjg@redhat.com>
Squashfs Cramfs
-Max filesystem size: 2^64 16 MiB
+Max filesystem size: 2^64 256 MiB
Max file size: ~ 2 TiB 16 MiB
Max files: unlimited unlimited
Max directories: unlimited unlimited
--- /dev/null
+
+Options for the ipv6 module are supplied as parameters at load time.
+
+Module options may be given as command line arguments to the insmod
+or modprobe command, but are usually specified in either the
+/etc/modules.conf or /etc/modprobe.conf configuration file, or in a
+distro-specific configuration file.
+
+The available ipv6 module parameters are listed below. If a parameter
+is not specified, the default value is used.
+
+The parameters are as follows:
+
+disable
+
+	Specifies whether the IPv6 module should be loaded with all
+	of its functionality disabled. This might be used when
+	another module depends on the IPv6 module being loaded, but
+	no IPv6 addresses or operations are desired.
+
+ The possible values and their effects are:
+
+ 0
+ IPv6 is enabled.
+
+ This is the default value.
+
+ 1
+ IPv6 is disabled.
+
+ No IPv6 addresses will be added to interfaces, and
+ it will not be possible to open an IPv6 socket.
+
+ A reboot is required to enable IPv6.
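+
+	For example, to load the module with its functionality
+	disabled (a usage sketch; equivalently, "options ipv6
+	disable=1" may be placed in the modprobe configuration file):
+
+		modprobe ipv6 disable=1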
+
============
The Chelsio T3 ASIC based Adapters (S310, S320, S302, S304, Mezz cards, etc.
-series of products) supports iSCSI acceleration and iSCSI Direct Data Placement
+series of products) support iSCSI acceleration and iSCSI Direct Data Placement
(DDP) where the hardware handles the expensive byte touching operations, such
as CRC computation and verification, and direct DMA to the final host memory
destination:
the TCP segments onto the wire. It handles TCP retransmission if
needed.
- On receving, S3 h/w recovers the iSCSI PDU by reassembling TCP
+ On receiving, S3 h/w recovers the iSCSI PDU by reassembling TCP
segments, separating the header and data, calculating and verifying
- the digests, then forwards the header to the host. The payload data,
+ the digests, then forwarding the header to the host. The payload data,
if possible, will be directly placed into the pre-posted host DDP
buffer. Otherwise, the payload data will be sent to the host too.
sure the ip address is unique in the network.
3. edit /etc/iscsi/iscsid.conf
- The default setting for MaxRecvDataSegmentLength (131072) is too big,
- replace "node.conn[0].iscsi.MaxRecvDataSegmentLength" to be a value no
- bigger than 15360 (for example 8192):
+ The default setting for MaxRecvDataSegmentLength (131072) is too big;
+ replace with a value no bigger than 15360 (for example 8192):
node.conn[0].iscsi.MaxRecvDataSegmentLength = 8192
The payload may be compressed. The format of both the compressed and
uncompressed data should be determined using the standard magic
- numbers. Currently only gzip compressed ELF is used.
+ numbers. The currently supported compression formats are gzip
+ (magic numbers 1F 8B or 1F 9E), bzip2 (magic number 42 5A) and LZMA
+ (magic number 5D 00). The uncompressed payload is currently always ELF
+ (magic number 7F 45 4C 46).
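+
+	As an illustrative sketch (not part of the protocol itself),
+	a loader could distinguish these formats by their leading
+	bytes:
+
+		/* return a name for the payload format, per the magic
+		   numbers listed above */
+		static const char *payload_format(const unsigned char *p)
+		{
+			if (p[0] == 0x1F && (p[1] == 0x8B || p[1] == 0x9E))
+				return "gzip";
+			if (p[0] == 0x42 && p[1] == 0x5A)
+				return "bzip2";
+			if (p[0] == 0x5D && p[1] == 0x00)
+				return "lzma";
+			if (p[0] == 0x7F && p[1] == 0x45 &&
+			    p[2] == 0x4C && p[3] == 0x46)
+				return "ELF (uncompressed)";
+			return "unknown";
+		}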
Field name: payload_length
Type: read
--- /dev/null
+
+Mini-HOWTO for using the earlyprintk=dbgp boot option with a
+USB2 Debug port key and a debug cable, on x86 systems.
+
+You need two computers, the 'USB debug key' special gadget, and
+two USB cables, connected like this:
+
+ [host/target] <-------> [USB debug key] <-------> [client/console]
+
+1. There are three specific hardware requirements:
+
+ a.) Host/target system needs to have USB debug port capability.
+
+    You can check for this by looking at the 'Debug port' capability
+    in the lspci -vvv output:
+
+ # lspci -vvv
+ ...
+ 00:1d.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 (rev 03) (prog-if 20 [EHCI])
+ Subsystem: Lenovo ThinkPad T61
+ Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
+ Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
+ Latency: 0
+ Interrupt: pin D routed to IRQ 19
+ Region 0: Memory at fe227000 (32-bit, non-prefetchable) [size=1K]
+ Capabilities: [50] Power Management version 2
+ Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
+ Status: D0 PME-Enable- DSel=0 DScale=0 PME+
+ Capabilities: [58] Debug port: BAR=1 offset=00a0
+ ^^^^^^^^^^^ <==================== [ HERE ]
+ Kernel driver in use: ehci_hcd
+ Kernel modules: ehci-hcd
+ ...
+
+( If your system does not list a debug port capability then you probably
+  won't be able to use the USB debug key. )
+
+ b.) You also need a Netchip USB debug cable/key:
+
+ http://www.plxtech.com/products/NET2000/NET20DC/default.asp
+
+    This is a small blue plastic connector with two USB connections;
+    it draws power from its USB connections.
+
+ c.) Thirdly, you need a second client/console system with a regular USB port.
+
+2. Software requirements:
+
+ a.) On the host/target system:
+
+ You need to enable the following kernel config option:
+
+ CONFIG_EARLY_PRINTK_DBGP=y
+
+ And you need to add the boot command line: "earlyprintk=dbgp".
+ (If you are using Grub, append it to the 'kernel' line in
+ /etc/grub.conf)
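+
+     For example, the resulting 'kernel' line could look like this
+     (kernel image path and root device are illustrative):
+
+       kernel /boot/vmlinuz ro root=/dev/sda1 earlyprintk=dbgp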
+
+     NOTE: normally the earlyprintk console gets turned off once the
+ regular console is alive - use "earlyprintk=dbgp,keep" to keep
+ this channel open beyond early bootup. This can be useful for
+ debugging crashes under Xorg, etc.
+
+ b.) On the client/console system:
+
+ You should enable the following kernel config option:
+
+ CONFIG_USB_SERIAL_DEBUG=y
+
+     On the next bootup with the modified kernel you should
+     get one or more /dev/ttyUSBx devices.
+
+ Now this channel of kernel messages is ready to be used: start
+ your favorite terminal emulator (minicom, etc.) and set
+ it up to use /dev/ttyUSB0 - or use a raw 'cat /dev/ttyUSBx' to
+ see the raw output.
+
+  c.) On Nvidia Southbridge based systems: the kernel will try to probe
+      and find out which port has the debug device connected.
+
+3. Testing that it works fine:
+
+ You can test the output by using earlyprintk=dbgp,keep and provoking
+ kernel messages on the host/target system. You can provoke a harmless
+   kernel message by, for example, doing:
+
+ echo h > /proc/sysrq-trigger
+
+ On the host/target system you should see this help line in "dmesg" output:
+
+ SysRq : HELP : loglevel(0-9) reBoot Crashdump terminate-all-tasks(E) memory-full-oom-kill(F) kill-all-tasks(I) saK show-backtrace-all-active-cpus(L) show-memory-usage(M) nice-all-RT-tasks(N) powerOff show-registers(P) show-all-timers(Q) unRaw Sync show-task-states(T) Unmount show-blocked-tasks(W) dump-ftrace-buffer(Z)
+
+ On the client/console system do:
+
+ cat /dev/ttyUSB0
+
+ And you should see the help line above displayed shortly after you've
+ provoked it on the host system.
+
+If it does not work, please ask about it on the linux-kernel@vger.kernel.org
+mailing list or contact the x86 maintainers.
ISDN SUBSYSTEM
P: Karsten Keil
-M: kkeil@suse.de
+M: isdn@linux-pingi.de
L: isdn4linux@listserv.isdn4linux.de (subscribers-only)
W: http://www.isdn4linux.de
T: git kernel.org:/pub/scm/linux/kernel/kkeil/isdn-2.6.git
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 29
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc7
NAME = Erotic Pickled Herring
# *DOCUMENTATION*
if (alpha_using_srm) {
static struct vm_struct console_remap_vm;
- unsigned long vaddr = VMALLOC_START;
+ unsigned long nr_pages = 0;
+ unsigned long vaddr;
unsigned long i, j;
+ /* calculate needed size */
+ for (i = 0; i < crb->map_entries; ++i)
+ nr_pages += crb->map[i].count;
+
+ /* register the vm area */
+ console_remap_vm.flags = VM_ALLOC;
+ console_remap_vm.size = nr_pages << PAGE_SHIFT;
+ vm_area_register_early(&console_remap_vm, PAGE_SIZE);
+
+ vaddr = (unsigned long)console_remap_vm.addr;
+
/* Set up the third level PTEs and update the virtual
addresses of the CRB entries. */
for (i = 0; i < crb->map_entries; ++i) {
vaddr += PAGE_SIZE;
}
}
-
- /* Let vmalloc know that we've allocated some space. */
- console_remap_vm.flags = VM_ALLOC;
- console_remap_vm.addr = (void *) VMALLOC_START;
- console_remap_vm.size = vaddr - VMALLOC_START;
- vmlist = &console_remap_vm;
}
callback_init_done = 1;
unsigned int cachetype = read_cpuid_cachetype();
unsigned int arch = cpu_architecture();
- if (arch >= CPU_ARCH_ARMv7) {
- cacheid = CACHEID_VIPT_NONALIASING;
- if ((cachetype & (3 << 14)) == 1 << 14)
- cacheid |= CACHEID_ASID_TAGGED;
- } else if (arch >= CPU_ARCH_ARMv6) {
- if (cachetype & (1 << 23))
+ if (arch >= CPU_ARCH_ARMv6) {
+ if ((cachetype & (7 << 29)) == 4 << 29) {
+ /* ARMv7 register format */
+ cacheid = CACHEID_VIPT_NONALIASING;
+ if ((cachetype & (3 << 14)) == 1 << 14)
+ cacheid |= CACHEID_ASID_TAGGED;
+ } else if (cachetype & (1 << 23))
cacheid = CACHEID_VIPT_ALIASING;
else
cacheid = CACHEID_VIPT_NONALIASING;
void __init at91_add_device_mmc(short mmc_id, struct at91_mmc_data *data) {}
#endif
+/* --------------------------------------------------------------------
+ * Compact Flash (PCMCIA or IDE)
+ * -------------------------------------------------------------------- */
+
+#if defined(CONFIG_AT91_CF) || defined(CONFIG_AT91_CF_MODULE) || \
+ defined(CONFIG_BLK_DEV_IDE_AT91) || defined(CONFIG_BLK_DEV_IDE_AT91_MODULE)
+
+static struct at91_cf_data cf0_data;
+
+static struct resource cf0_resources[] = {
+ [0] = {
+ .start = AT91_CHIPSELECT_4,
+ .end = AT91_CHIPSELECT_4 + SZ_256M - 1,
+ .flags = IORESOURCE_MEM | IORESOURCE_MEM_8AND16BIT,
+ }
+};
+
+static struct platform_device cf0_device = {
+ .id = 0,
+ .dev = {
+ .platform_data = &cf0_data,
+ },
+ .resource = cf0_resources,
+ .num_resources = ARRAY_SIZE(cf0_resources),
+};
+
+static struct at91_cf_data cf1_data;
+
+static struct resource cf1_resources[] = {
+ [0] = {
+ .start = AT91_CHIPSELECT_5,
+ .end = AT91_CHIPSELECT_5 + SZ_256M - 1,
+ .flags = IORESOURCE_MEM | IORESOURCE_MEM_8AND16BIT,
+ }
+};
+
+static struct platform_device cf1_device = {
+ .id = 1,
+ .dev = {
+ .platform_data = &cf1_data,
+ },
+ .resource = cf1_resources,
+ .num_resources = ARRAY_SIZE(cf1_resources),
+};
+
+void __init at91_add_device_cf(struct at91_cf_data *data)
+{
+ unsigned long ebi0_csa;
+ struct platform_device *pdev;
+
+ if (!data)
+ return;
+
+	/*
+	 * assign CS4 or CS5 to SMC with Compact Flash logic support;
+	 * we assume SMC timings are configured by board code,
+	 * except in True IDE mode where timings are controlled by the driver
+	 */
+ ebi0_csa = at91_sys_read(AT91_MATRIX_EBI0CSA);
+ switch (data->chipselect) {
+ case 4:
+ at91_set_A_periph(AT91_PIN_PD6, 0); /* EBI0_NCS4/CFCS0 */
+ ebi0_csa |= AT91_MATRIX_EBI0_CS4A_SMC_CF1;
+ cf0_data = *data;
+ pdev = &cf0_device;
+ break;
+ case 5:
+ at91_set_A_periph(AT91_PIN_PD7, 0); /* EBI0_NCS5/CFCS1 */
+ ebi0_csa |= AT91_MATRIX_EBI0_CS5A_SMC_CF2;
+ cf1_data = *data;
+ pdev = &cf1_device;
+ break;
+ default:
+ printk(KERN_ERR "AT91 CF: bad chip-select requested (%u)\n",
+ data->chipselect);
+ return;
+ }
+ at91_sys_write(AT91_MATRIX_EBI0CSA, ebi0_csa);
+
+ if (data->det_pin) {
+ at91_set_gpio_input(data->det_pin, 1);
+ at91_set_deglitch(data->det_pin, 1);
+ }
+
+ if (data->irq_pin) {
+ at91_set_gpio_input(data->irq_pin, 1);
+ at91_set_deglitch(data->irq_pin, 1);
+ }
+
+ if (data->vcc_pin)
+ /* initially off */
+ at91_set_gpio_output(data->vcc_pin, 0);
+
+ /* enable EBI controlled pins */
+ at91_set_A_periph(AT91_PIN_PD5, 1); /* NWAIT */
+ at91_set_A_periph(AT91_PIN_PD8, 0); /* CFCE1 */
+ at91_set_A_periph(AT91_PIN_PD9, 0); /* CFCE2 */
+ at91_set_A_periph(AT91_PIN_PD14, 0); /* CFNRW */
+
+ pdev->name = (data->flags & AT91_CF_TRUE_IDE) ? "at91_ide" : "at91_cf";
+ platform_device_register(pdev);
+}
+#else
+void __init at91_add_device_cf(struct at91_cf_data *data) {}
+#endif
/* --------------------------------------------------------------------
* NAND / SmartMedia
u8 vcc_pin; /* power switching */
u8 rst_pin; /* card reset */
u8 chipselect; /* EBI Chip Select number */
+ u8 flags;
+#define AT91_CF_TRUE_IDE 0x01
+#define AT91_IDE_SWAP_A0_A2 0x02
};
extern void __init at91_add_device_cf(struct at91_cf_data *data);
at91_sys_read(AT91_AIC_IPR) & at91_sys_read(AT91_AIC_IMR));
error:
- sdram_selfrefresh_disable();
target_state = PM_SUSPEND_ON;
at91_irq_resume();
at91_gpio_resume();
}
ldp_smc911x_resources[0].start = cs_mem_base + 0x0;
- ldp_smc911x_resources[0].end = cs_mem_base + 0xf;
+ ldp_smc911x_resources[0].end = cs_mem_base + 0xff;
udelay(100);
eth_gpio = LDP_SMC911X_GPIO;
#ifdef CONFIG_CPU_32v6K
clrex
#else
- strex r0, r1, [sp] @ Clear the exclusive monitor
+ sub r1, sp, #4 @ Get unused stack location
+ strex r0, r1, [r1] @ Clear the exclusive monitor
#endif
mrc p15, 0, r1, c5, c0, 0 @ get FSR
mrc p15, 0, r0, c6, c0, 0 @ get FAR
u32 mask;
mask = __raw_readl(S3C64XX_EINT0MASK);
- mask |= eint_irq_to_bit(irq);
+ mask &= ~eint_irq_to_bit(irq);
__raw_writel(mask, S3C64XX_EINT0MASK);
}
config QUICKLIST
def_bool y
-config HAVE_ARCH_BOOTMEM_NODE
+config HAVE_ARCH_BOOTMEM
def_bool n
config ARCH_HAVE_MEMORY_PRESENT
config PM_WAKEUP_BY_GPIO
bool "Allow Wakeup from Standby by GPIO"
+ depends on PM && !BF54x
config PM_WAKEUP_GPIO_NUMBER
int "GPIO number"
default n
help
Enable General-Purpose Wake-Up (Voltage Regulator Power-Up)
+ (all processors, except ADSP-BF549). This option sets
+ the general-purpose wake-up enable (GPWE) control bit to enable
+ wake-up upon detection of an active low signal on the /GPW (PH7) pin.
+          On ADSP-BF549 this option enables the same functionality on the
+          /MRXON pin (also PH7).
+
endmenu
menu "CPU Frequency scaling"
config HAVE_ARCH_KGDB
def_bool y
-config KGDB_TESTCASE
- tristate "KGDB: for test case in expect"
- default n
- help
- This is a kgdb test case for automated testing.
-
config DEBUG_VERBOSE
bool "Verbose fault messages"
default y
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.28-rc2
-# Fri Jan 9 17:58:41 2009
+# Linux kernel version: 2.6.28
+# Fri Feb 20 10:01:44 2009
#
# CONFIG_MMU is not set
# CONFIG_FPU is not set
# CONFIG_BF538 is not set
# CONFIG_BF539 is not set
# CONFIG_BF542 is not set
+# CONFIG_BF542M is not set
# CONFIG_BF544 is not set
+# CONFIG_BF544M is not set
# CONFIG_BF547 is not set
+# CONFIG_BF547M is not set
# CONFIG_BF548 is not set
+# CONFIG_BF548M is not set
# CONFIG_BF549 is not set
+# CONFIG_BF549M is not set
# CONFIG_BF561 is not set
CONFIG_BF_REV_MIN=0
CONFIG_BF_REV_MAX=2
# CONFIG_TIPC is not set
# CONFIG_ATM is not set
# CONFIG_BRIDGE is not set
-# CONFIG_NET_DSA is not set
+CONFIG_NET_DSA=y
+# CONFIG_NET_DSA_TAG_DSA is not set
+# CONFIG_NET_DSA_TAG_EDSA is not set
+# CONFIG_NET_DSA_TAG_TRAILER is not set
+CONFIG_NET_DSA_TAG_STPID=y
+# CONFIG_NET_DSA_MV88E6XXX is not set
+# CONFIG_NET_DSA_MV88E6060 is not set
+# CONFIG_NET_DSA_MV88E6XXX_NEED_PPU is not set
+# CONFIG_NET_DSA_MV88E6131 is not set
+# CONFIG_NET_DSA_MV88E6123_61_65 is not set
+CONFIG_NET_DSA_KSZ8893M=y
# CONFIG_VLAN_8021Q is not set
# CONFIG_DECNET is not set
# CONFIG_LLC2 is not set
#
# Self-contained MTD device drivers
#
+# CONFIG_MTD_DATAFLASH is not set
+# CONFIG_MTD_M25P80 is not set
# CONFIG_MTD_SLRAM is not set
# CONFIG_MTD_PHRAM is not set
# CONFIG_MTD_MTDRAM is not set
# CONFIG_BLK_DEV_HD is not set
CONFIG_MISC_DEVICES=y
# CONFIG_EEPROM_93CX6 is not set
+# CONFIG_ICS932S401 is not set
# CONFIG_ENCLOSURE_SERVICES is not set
+# CONFIG_C2PORT is not set
CONFIG_HAVE_IDE=y
# CONFIG_IDE is not set
# CONFIG_SMC91X is not set
# CONFIG_SMSC911X is not set
# CONFIG_DM9000 is not set
+# CONFIG_ENC28J60 is not set
# CONFIG_IBM_NEW_EMAC_ZMII is not set
# CONFIG_IBM_NEW_EMAC_RGMII is not set
# CONFIG_IBM_NEW_EMAC_TAH is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# CONFIG_I2C_DEBUG_CHIP is not set
-# CONFIG_SPI is not set
+CONFIG_SPI=y
+# CONFIG_SPI_DEBUG is not set
+CONFIG_SPI_MASTER=y
+
+#
+# SPI Master Controller Drivers
+#
+CONFIG_SPI_BFIN=y
+# CONFIG_SPI_BFIN_LOCK is not set
+# CONFIG_SPI_BITBANG is not set
+
+#
+# SPI Protocol Masters
+#
+# CONFIG_SPI_AT25 is not set
+# CONFIG_SPI_SPIDEV is not set
+# CONFIG_SPI_TLE62X0 is not set
CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
# CONFIG_GPIOLIB is not set
# CONFIG_W1 is not set
# CONFIG_MFD_SM501 is not set
# CONFIG_HTC_PASIC3 is not set
# CONFIG_MFD_TMIO is not set
+# CONFIG_PMIC_DA903X is not set
# CONFIG_MFD_WM8400 is not set
# CONFIG_MFD_WM8350_I2C is not set
+# CONFIG_REGULATOR is not set
#
# Multimedia devices
# CONFIG_RTC_DRV_M41T80 is not set
# CONFIG_RTC_DRV_S35390A is not set
# CONFIG_RTC_DRV_FM3130 is not set
+# CONFIG_RTC_DRV_RX8581 is not set
#
# SPI RTC drivers
#
+# CONFIG_RTC_DRV_M41T94 is not set
+# CONFIG_RTC_DRV_DS1305 is not set
+# CONFIG_RTC_DRV_DS1390 is not set
+# CONFIG_RTC_DRV_MAX6902 is not set
+# CONFIG_RTC_DRV_R9701 is not set
+# CONFIG_RTC_DRV_RS5C348 is not set
+# CONFIG_RTC_DRV_DS3234 is not set
#
# Platform RTC drivers
# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
# CONFIG_FAULT_INJECTION is not set
CONFIG_SYSCTL_SYSCALL_CHECK=y
+
+#
+# Tracers
+#
+# CONFIG_SCHED_TRACER is not set
+# CONFIG_CONTEXT_SWITCH_TRACER is not set
+# CONFIG_BOOT_TRACER is not set
# CONFIG_DYNAMIC_PRINTK_DEBUG is not set
# CONFIG_SAMPLES is not set
CONFIG_HAVE_ARCH_KGDB=y
# CONFIG_KGDB is not set
# CONFIG_DEBUG_STACKOVERFLOW is not set
# CONFIG_DEBUG_STACK_USAGE is not set
+# CONFIG_KGDB_TESTCASE is not set
CONFIG_DEBUG_VERBOSE=y
CONFIG_DEBUG_MMRS=y
# CONFIG_DEBUG_HWERR is not set
#
# CONFIG_CRYPTO_FIPS is not set
# CONFIG_CRYPTO_MANAGER is not set
+# CONFIG_CRYPTO_MANAGER2 is not set
# CONFIG_CRYPTO_GF128MUL is not set
# CONFIG_CRYPTO_NULL is not set
# CONFIG_CRYPTO_CRYPTD is not set
CONFIG_BFIN_DCACHE=y
# CONFIG_BFIN_DCACHE_BANKA is not set
# CONFIG_BFIN_ICACHE_LOCK is not set
-# CONFIG_BFIN_WB is not set
-CONFIG_BFIN_WT=y
+CONFIG_BFIN_WB=y
+# CONFIG_BFIN_WT is not set
# CONFIG_MPU is not set
#
CONFIG_BFIN_DCACHE=y
# CONFIG_BFIN_DCACHE_BANKA is not set
# CONFIG_BFIN_ICACHE_LOCK is not set
-# CONFIG_BFIN_WB is not set
-CONFIG_BFIN_WT=y
+CONFIG_BFIN_WB=y
+# CONFIG_BFIN_WT is not set
# CONFIG_MPU is not set
#
CONFIG_BFIN_DCACHE=y
# CONFIG_BFIN_DCACHE_BANKA is not set
# CONFIG_BFIN_ICACHE_LOCK is not set
-# CONFIG_BFIN_WB is not set
-CONFIG_BFIN_WT=y
+CONFIG_BFIN_WB=y
+# CONFIG_BFIN_WT is not set
# CONFIG_MPU is not set
#
CONFIG_BFIN_DCACHE=y
# CONFIG_BFIN_DCACHE_BANKA is not set
# CONFIG_BFIN_ICACHE_LOCK is not set
-# CONFIG_BFIN_WB is not set
-CONFIG_BFIN_WT=y
+CONFIG_BFIN_WB=y
+# CONFIG_BFIN_WT is not set
# CONFIG_MPU is not set
#
# CONFIG_MTD_DOC2000 is not set
# CONFIG_MTD_DOC2001 is not set
# CONFIG_MTD_DOC2001PLUS is not set
-CONFIG_MTD_NAND=m
-# CONFIG_MTD_NAND_VERIFY_WRITE is not set
-# CONFIG_MTD_NAND_ECC_SMC is not set
-# CONFIG_MTD_NAND_MUSEUM_IDS is not set
-# CONFIG_MTD_NAND_BFIN is not set
-CONFIG_MTD_NAND_IDS=m
-# CONFIG_MTD_NAND_DISKONCHIP is not set
-# CONFIG_MTD_NAND_NANDSIM is not set
-CONFIG_MTD_NAND_PLATFORM=m
+# CONFIG_MTD_NAND is not set
# CONFIG_MTD_ONENAND is not set
#
CONFIG_BFIN_DCACHE=y
# CONFIG_BFIN_DCACHE_BANKA is not set
# CONFIG_BFIN_ICACHE_LOCK is not set
-# CONFIG_BFIN_WB is not set
-CONFIG_BFIN_WT=y
+CONFIG_BFIN_WB=y
+# CONFIG_BFIN_WT is not set
# CONFIG_MPU is not set
#
CONFIG_BFIN_DCACHE=y
# CONFIG_BFIN_DCACHE_BANKA is not set
# CONFIG_BFIN_ICACHE_LOCK is not set
-# CONFIG_BFIN_WB is not set
-CONFIG_BFIN_WT=y
+CONFIG_BFIN_WB=y
+# CONFIG_BFIN_WT is not set
# CONFIG_BFIN_L2_CACHEABLE is not set
# CONFIG_MPU is not set
CONFIG_SCSI_DMA=y
# CONFIG_SCSI_TGT is not set
# CONFIG_SCSI_NETLINK is not set
-CONFIG_SCSI_PROC_FS=y
+# CONFIG_SCSI_PROC_FS is not set
#
# SCSI support type (disk, tape, CD-ROM)
CONFIG_BFIN_DCACHE=y
# CONFIG_BFIN_DCACHE_BANKA is not set
# CONFIG_BFIN_ICACHE_LOCK is not set
-# CONFIG_BFIN_WB is not set
-CONFIG_BFIN_WT=y
+CONFIG_BFIN_WB=y
+# CONFIG_BFIN_WT is not set
# CONFIG_BFIN_L2_CACHEABLE is not set
# CONFIG_MPU is not set
CONFIG_BFIN_DCACHE=y
# CONFIG_BFIN_DCACHE_BANKA is not set
# CONFIG_BFIN_ICACHE_LOCK is not set
-# CONFIG_BFIN_WB is not set
-CONFIG_BFIN_WT=y
+CONFIG_BFIN_WB=y
+# CONFIG_BFIN_WT is not set
# CONFIG_MPU is not set
#
CONFIG_BFIN_DCACHE=y
# CONFIG_BFIN_DCACHE_BANKA is not set
# CONFIG_BFIN_ICACHE_LOCK is not set
-# CONFIG_BFIN_WB is not set
-CONFIG_BFIN_WT=y
+CONFIG_BFIN_WB=y
+# CONFIG_BFIN_WT is not set
# CONFIG_MPU is not set
#
CONFIG_BFIN_DCACHE=y
# CONFIG_BFIN_DCACHE_BANKA is not set
# CONFIG_BFIN_ICACHE_LOCK is not set
-# CONFIG_BFIN_WB is not set
-CONFIG_BFIN_WT=y
+CONFIG_BFIN_WB=y
+# CONFIG_BFIN_WT is not set
CONFIG_L1_MAX_PIECE=16
# CONFIG_MPU is not set
CONFIG_SCSI_DMA=y
# CONFIG_SCSI_TGT is not set
# CONFIG_SCSI_NETLINK is not set
-CONFIG_SCSI_PROC_FS=y
+# CONFIG_SCSI_PROC_FS is not set
#
# SCSI support type (disk, tape, CD-ROM)
CONFIG_SCSI=y
# CONFIG_SCSI_TGT is not set
# CONFIG_SCSI_NETLINK is not set
-CONFIG_SCSI_PROC_FS=y
+# CONFIG_SCSI_PROC_FS is not set
#
# SCSI support type (disk, tape, CD-ROM)
CONFIG_BFIN_DCACHE=y
# CONFIG_BFIN_DCACHE_BANKA is not set
# CONFIG_BFIN_ICACHE_LOCK is not set
-# CONFIG_BFIN_WB is not set
-CONFIG_BFIN_WT=y
+CONFIG_BFIN_WB=y
+# CONFIG_BFIN_WT is not set
CONFIG_L1_MAX_PIECE=16
#
include include/asm-generic/Kbuild.asm
+unifdef-y += bfin_sport.h
unifdef-y += fixed_code.h
/*
- * File: include/asm-blackfin/bfin_sport.h
- * Based on:
- * Author: Roy Huang (roy.huang@analog.com)
+ * bfin_sport.h - userspace header for bfin sport driver
*
- * Created: Thu Aug. 24 2006
- * Description:
+ * Copyright 2004-2008 Analog Devices Inc.
*
- * Modified:
- * Copyright 2004-2006 Analog Devices Inc.
- *
- * Bugs: Enter bugs at http://blackfin.uclinux.org/
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, see the file COPYING, or write
- * to the Free Software Foundation, Inc.,
- * 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+ * Licensed under the GPL-2 or later.
*/
#ifndef __BFIN_SPORT_H__
#define NORM_FORMAT 0x0
#define ALAW_FORMAT 0x2
#define ULAW_FORMAT 0x3
-struct sport_register;
/* Function drivers which use sport must initialize the structure */
struct sport_config {
- /*TDM (multichannels), I2S or other mode */
+ /* TDM (multichannels), I2S or other mode */
unsigned int mode:3;
/* if TDM mode is selected, channels must be set */
int serial_clk;
int fsync_clk;
- unsigned int data_format:2; /*Normal, u-law or a-law */
+ unsigned int data_format:2; /* Normal, u-law or a-law */
	int word_len;	/* Length of the word in bits, 3-32 bits */
int dma_enabled;
};
+/* Userspace interface */
+#define SPORT_IOC_MAGIC 'P'
+#define SPORT_IOC_CONFIG _IOWR('P', 0x01, struct sport_config)
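+
+/*
+ * Hypothetical userspace usage sketch (the device node name is
+ * illustrative, not defined by this header):
+ *
+ *	struct sport_config cfg = {
+ *		.word_len = 16,		/* 16-bit words */
+ *		.dma_enabled = 1,
+ *	};
+ *	int fd = open("/dev/sport0", O_RDWR);
+ *	if (fd >= 0)
+ *		ioctl(fd, SPORT_IOC_CONFIG, &cfg);
+ */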
+
+#ifdef __KERNEL__
+
struct sport_register {
unsigned short tcr1;
unsigned short reserved0;
unsigned long mrcs3;
};
-#define SPORT_IOC_MAGIC 'P'
-#define SPORT_IOC_CONFIG _IOWR('P', 0x01, struct sport_config)
-
struct sport_dev {
struct cdev cdev; /* Char device structure */
struct sport_config config;
};
+#endif
+
#define SPORT_TCR1 0
#define SPORT_TCR2 1
#define SPORT_TCLKDIV 2
#define SPORT_MRCS2 22
#define SPORT_MRCS3 23
-#endif /*__BFIN_SPORT_H__*/
+#endif
#include <asm/atomic.h>
#include <asm/traps.h>
-#define IPIPE_ARCH_STRING "1.8-00"
+#define IPIPE_ARCH_STRING "1.9-00"
#define IPIPE_MAJOR_NUMBER 1
-#define IPIPE_MINOR_NUMBER 8
+#define IPIPE_MINOR_NUMBER 9
#define IPIPE_PATCH_NUMBER 0
#ifdef CONFIG_SMP
"%2 = CYCLES2\n" \
"CC = %2 == %0\n" \
"if ! CC jump 1b\n" \
- : "=r" (((unsigned long *)&t)[1]), \
- "=r" (((unsigned long *)&t)[0]), \
- "=r" (__cy2) \
+ : "=d,a" (((unsigned long *)&t)[1]), \
+ "=d,a" (((unsigned long *)&t)[0]), \
+ "=d,a" (__cy2) \
: /*no input*/ : "CC"); \
t; \
})
#define __ipipe_disable_irq(irq) (irq_desc[irq].chip->mask(irq))
-#define __ipipe_lock_root() \
- set_bit(IPIPE_ROOTLOCK_FLAG, &ipipe_root_domain->flags)
+static inline int __ipipe_check_tickdev(const char *devname)
+{
+ return 1;
+}
-#define __ipipe_unlock_root() \
- clear_bit(IPIPE_ROOTLOCK_FLAG, &ipipe_root_domain->flags)
+static inline void __ipipe_lock_root(void)
+{
+ set_bit(IPIPE_SYNCDEFER_FLAG, &ipipe_root_cpudom_var(status));
+}
+
+static inline void __ipipe_unlock_root(void)
+{
+ clear_bit(IPIPE_SYNCDEFER_FLAG, &ipipe_root_cpudom_var(status));
+}
void __ipipe_enable_pipeline(void);
#define __ipipe_hook_critical_ipi(ipd) do { } while (0)
-#define __ipipe_sync_pipeline(syncmask) \
- do { \
- struct ipipe_domain *ipd = ipipe_current_domain; \
- if (likely(ipd != ipipe_root_domain || !test_bit(IPIPE_ROOTLOCK_FLAG, &ipd->flags))) \
- __ipipe_sync_stage(syncmask); \
- } while (0)
+#define __ipipe_sync_pipeline ___ipipe_sync_pipeline
+void ___ipipe_sync_pipeline(unsigned long syncmask);
void __ipipe_handle_irq(unsigned irq, struct pt_regs *regs);
int __ipipe_get_irq_priority(unsigned irq);
-int __ipipe_get_irqthread_priority(unsigned irq);
-
void __ipipe_stall_root_raw(void);
void __ipipe_unstall_root_raw(void);
void __ipipe_serial_debug(const char *fmt, ...);
+asmlinkage void __ipipe_call_irqtail(unsigned long addr);
+
DECLARE_PER_CPU(struct pt_regs, __ipipe_tick_regs);
extern unsigned long __ipipe_core_clock;
#define __ipipe_run_irqtail() /* Must be a macro */ \
do { \
- asmlinkage void __ipipe_call_irqtail(void); \
unsigned long __pending; \
- CSYNC(); \
+ CSYNC(); \
__pending = bfin_read_IPEND(); \
if (__pending & 0x8000) { \
__pending &= ~0x8010; \
if (__pending && (__pending & (__pending - 1)) == 0) \
- __ipipe_call_irqtail(); \
+ __ipipe_call_irqtail(__ipipe_irq_tail_hook); \
} \
} while (0)
#define __ipipe_run_isr(ipd, irq) \
do { \
if (ipd == ipipe_root_domain) { \
- /* \
- * Note: the I-pipe implements a threaded interrupt model on \
- * this arch for Linux external IRQs. The interrupt handler we \
- * call here only wakes up the associated IRQ thread. \
- */ \
- if (ipipe_virtual_irq_p(irq)) { \
- /* No irqtail here; virtual interrupts have no effect \
- on IPEND so there is no need for processing \
- deferral. */ \
- local_irq_enable_nohead(ipd); \
+ local_irq_enable_hw(); \
+ if (ipipe_virtual_irq_p(irq)) \
ipd->irqs[irq].handler(irq, ipd->irqs[irq].cookie); \
- local_irq_disable_nohead(ipd); \
- } else \
- /* \
- * No need to run the irqtail here either; \
- * we can't be preempted by hw IRQs, so \
- * non-Linux IRQs cannot stack over the short \
- * thread wakeup code. Which in turn means \
- * that no irqtail condition could be pending \
- * for domains above Linux in the pipeline. \
- */ \
+ else \
ipd->irqs[irq].handler(irq, &__raw_get_cpu_var(__ipipe_tick_regs)); \
+ local_irq_disable_hw(); \
} else { \
__clear_bit(IPIPE_SYNC_FLAG, &ipipe_cpudom_var(ipd, status)); \
local_irq_enable_nohead(ipd); \
int ipipe_start_irq_thread(unsigned irq, struct irq_desc *desc);
-#define IS_SYSIRQ(irq) ((irq) > IRQ_CORETMR && (irq) <= SYS_IRQS)
-#define IS_GPIOIRQ(irq) ((irq) >= GPIO_IRQ_BASE && (irq) < NR_IRQS)
-
+#ifdef CONFIG_GENERIC_CLOCKEVENTS
+#define IRQ_SYSTMR IRQ_CORETMR
+#define IRQ_PRIOTMR IRQ_CORETMR
+#else
#define IRQ_SYSTMR IRQ_TIMER0
#define IRQ_PRIOTMR CONFIG_IRQ_TIMER0
+#endif
-#if defined(CONFIG_BF531) || defined(CONFIG_BF532) || defined(CONFIG_BF533)
-#define PRIO_GPIODEMUX(irq) CONFIG_PFA
-#elif defined(CONFIG_BF534) || defined(CONFIG_BF536) || defined(CONFIG_BF537)
-#define PRIO_GPIODEMUX(irq) CONFIG_IRQ_PROG_INTA
-#elif defined(CONFIG_BF52x)
-#define PRIO_GPIODEMUX(irq) ((irq) == IRQ_PORTF_INTA ? CONFIG_IRQ_PORTF_INTA : \
- (irq) == IRQ_PORTG_INTA ? CONFIG_IRQ_PORTG_INTA : \
- (irq) == IRQ_PORTH_INTA ? CONFIG_IRQ_PORTH_INTA : \
- -1)
-#elif defined(CONFIG_BF561)
-#define PRIO_GPIODEMUX(irq) ((irq) == IRQ_PROG0_INTA ? CONFIG_IRQ_PROG0_INTA : \
- (irq) == IRQ_PROG1_INTA ? CONFIG_IRQ_PROG1_INTA : \
- (irq) == IRQ_PROG2_INTA ? CONFIG_IRQ_PROG2_INTA : \
- -1)
+#ifdef CONFIG_BF561
#define bfin_write_TIMER_DISABLE(val) bfin_write_TMRS8_DISABLE(val)
#define bfin_write_TIMER_ENABLE(val) bfin_write_TMRS8_ENABLE(val)
#define bfin_write_TIMER_STATUS(val) bfin_write_TMRS8_STATUS(val)
#define bfin_read_TIMER_STATUS() bfin_read_TMRS8_STATUS()
#elif defined(CONFIG_BF54x)
-#define PRIO_GPIODEMUX(irq) ((irq) == IRQ_PINT0 ? CONFIG_IRQ_PINT0 : \
- (irq) == IRQ_PINT1 ? CONFIG_IRQ_PINT1 : \
- (irq) == IRQ_PINT2 ? CONFIG_IRQ_PINT2 : \
- (irq) == IRQ_PINT3 ? CONFIG_IRQ_PINT3 : \
- -1)
#define bfin_write_TIMER_DISABLE(val) bfin_write_TIMER_DISABLE0(val)
#define bfin_write_TIMER_ENABLE(val) bfin_write_TIMER_ENABLE0(val)
#define bfin_write_TIMER_STATUS(val) bfin_write_TIMER_STATUS0(val)
#define bfin_read_TIMER_STATUS(val) bfin_read_TIMER_STATUS0(val)
-#else
-# error "no PRIO_GPIODEMUX() for this part"
#endif
#define __ipipe_root_tick_p(regs) ((regs->ipend & 0x10) != 0)
#endif /* !CONFIG_IPIPE */
+#define ipipe_update_tick_evtdev(evtdev) do { } while (0)
+
#endif /* !__ASM_BLACKFIN_IPIPE_H */
/* -*- linux-c -*-
- * include/asm-blackfin/_baseipipe.h
+ * include/asm-blackfin/ipipe_base.h
*
* Copyright (C) 2007 Philippe Gerum.
*
#define IPIPE_NR_XIRQS NR_IRQS
#define IPIPE_IRQ_ISHIFT 5 /* 2^5 for 32bits arch. */
-/* Blackfin-specific, global domain flags */
-#define IPIPE_ROOTLOCK_FLAG 1 /* Lock pipeline for root */
+/* Blackfin-specific, per-cpu pipeline status */
+#define IPIPE_SYNCDEFER_FLAG 15
+#define IPIPE_SYNCDEFER_MASK	(1L << IPIPE_SYNCDEFER_FLAG)
/* Blackfin traps -- i.e. exception vector numbers */
#define IPIPE_NR_FAULTS 52 /* We leave a gap after VEC_ILL_RES. */
#ifndef __ASSEMBLY__
-#include <linux/bitops.h>
-
-extern int test_bit(int nr, const void *addr);
-
-
extern unsigned long __ipipe_root_status; /* Alias to ipipe_root_cpudom_var(status) */
static inline void __ipipe_stall_root(void)
#define raw_irqs_disabled_flags(flags) (!irqs_enabled_from_flags_hw(flags))
#define local_test_iflag_hw(x) irqs_enabled_from_flags_hw(x)
-#define local_save_flags(x) \
- do { \
- (x) = __ipipe_test_root() ? \
+#define local_save_flags(x) \
+ do { \
+ (x) = __ipipe_test_root() ? \
__all_masked_irq_flags : bfin_irq_flags; \
+ barrier(); \
} while (0)
-#define local_irq_save(x) \
- do { \
- (x) = __ipipe_test_and_stall_root(); \
+#define local_irq_save(x) \
+ do { \
+ (x) = __ipipe_test_and_stall_root() ? \
+ __all_masked_irq_flags : bfin_irq_flags; \
+ barrier(); \
+ } while (0)
+
+static inline void local_irq_restore(unsigned long x)
+{
+ barrier();
+ __ipipe_restore_root(x == __all_masked_irq_flags);
+}
+
+#define local_irq_disable() \
+ do { \
+ __ipipe_stall_root(); \
+ barrier(); \
} while (0)
-#define local_irq_restore(x) __ipipe_restore_root(x)
-#define local_irq_disable() __ipipe_stall_root()
-#define local_irq_enable() __ipipe_unstall_root()
+static inline void local_irq_enable(void)
+{
+ barrier();
+ __ipipe_unstall_root();
+}
+
#define irqs_disabled() __ipipe_test_root()
#define local_save_flags_hw(x) \
#include <asm-generic/percpu.h>
-#ifdef CONFIG_MODULES
-#define PERCPU_MODULE_RESERVE 8192
-#else
-#define PERCPU_MODULE_RESERVE 0
-#endif
-
-#define PERCPU_ENOUGH_ROOM \
- (ALIGN(__per_cpu_end - __per_cpu_start, SMP_CACHE_BYTES) + \
- PERCPU_MODULE_RESERVE)
-
#endif /* __ARCH_BLACKFIN_PERCPU__ */
#define TIF_MEMDIE 4
#define TIF_RESTORE_SIGMASK 5 /* restore signal mask in do_signal() */
#define TIF_FREEZE 6 /* is freezing for suspend */
+#define TIF_IRQ_SYNC 7 /* sync pipeline stage */
/* as above, but as bit values */
#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
#define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)
#define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK)
#define _TIF_FREEZE (1<<TIF_FREEZE)
+#define _TIF_IRQ_SYNC (1<<TIF_IRQ_SYNC)
#define _TIF_WORK_MASK 0x0000FFFE /* work to do on interrupt/exception return */
obj-y += time.o
endif
-CFLAGS_kgdb_test.o := -mlong-calls -O0
-
obj-$(CONFIG_IPIPE) += ipipe.o
obj-$(CONFIG_IPIPE_TRACE_MCOUNT) += mcount.o
obj-$(CONFIG_BFIN_GPTIMERS) += gptimers.o
obj-$(CONFIG_CPLB_INFO) += cplbinfo.o
obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_KGDB) += kgdb.o
-obj-$(CONFIG_KGDB_TESTCASE) += kgdb_test.o
+obj-$(CONFIG_KGDB_TESTS) += kgdb_test.o
obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
+
+# the kgdb test puts code into L2 and without linker
+# relaxation, we need to force long calls to/from it
+CFLAGS_kgdb_test.o := -mlong-calls -O0
i_d = i_i = 0;
+#ifdef CONFIG_DEBUG_HUNT_FOR_ZERO
/* Set up the zero page. */
d_tbl[i_d].addr = 0;
d_tbl[i_d++].data = SDRAM_OOPS | PAGE_SIZE_1KB;
+ i_tbl[i_i].addr = 0;
+ i_tbl[i_i++].data = SDRAM_OOPS | PAGE_SIZE_1KB;
+#endif
/* Cover kernel memory with 4M pages. */
addr = 0;
#include <asm/atomic.h>
#include <asm/io.h>
-static int create_irq_threads;
-
DEFINE_PER_CPU(struct pt_regs, __ipipe_tick_regs);
-static DEFINE_PER_CPU(unsigned long, pending_irqthread_mask);
-
-static DEFINE_PER_CPU(int [IVG13 + 1], pending_irq_count);
-
asmlinkage void asm_do_IRQ(unsigned int irq, struct pt_regs *regs);
static void __ipipe_no_irqtail(void);
*/
void __ipipe_handle_irq(unsigned irq, struct pt_regs *regs)
{
+ struct ipipe_percpu_domain_data *p = ipipe_root_cpudom_ptr();
struct ipipe_domain *this_domain, *next_domain;
struct list_head *head, *pos;
int m_ack, s = -1;
* interrupt.
*/
m_ack = (regs == NULL || irq == IRQ_SYSTMR || irq == IRQ_CORETMR);
-
this_domain = ipipe_current_domain;
if (unlikely(test_bit(IPIPE_STICKY_FLAG, &this_domain->irqs[irq].control)))
next_domain = list_entry(head, struct ipipe_domain, p_link);
if (likely(test_bit(IPIPE_WIRED_FLAG, &next_domain->irqs[irq].control))) {
if (!m_ack && next_domain->irqs[irq].acknowledge != NULL)
- next_domain->irqs[irq].acknowledge(irq, irq_desc + irq);
- if (test_bit(IPIPE_ROOTLOCK_FLAG, &ipipe_root_domain->flags))
- s = __test_and_set_bit(IPIPE_STALL_FLAG,
- &ipipe_root_cpudom_var(status));
+ next_domain->irqs[irq].acknowledge(irq, irq_to_desc(irq));
+ if (test_bit(IPIPE_SYNCDEFER_FLAG, &p->status))
+ s = __test_and_set_bit(IPIPE_STALL_FLAG, &p->status);
__ipipe_dispatch_wired(next_domain, irq);
- goto finalize;
- return;
+ goto out;
}
}
/* Ack the interrupt. */
pos = head;
-
while (pos != &__ipipe_pipeline) {
next_domain = list_entry(pos, struct ipipe_domain, p_link);
- /*
- * For each domain handling the incoming IRQ, mark it
- * as pending in its log.
- */
if (test_bit(IPIPE_HANDLE_FLAG, &next_domain->irqs[irq].control)) {
- /*
- * Domains that handle this IRQ are polled for
- * acknowledging it by decreasing priority
- * order. The interrupt must be made pending
- * _first_ in the domain's status flags before
- * the PIC is unlocked.
- */
__ipipe_set_irq_pending(next_domain, irq);
-
if (!m_ack && next_domain->irqs[irq].acknowledge != NULL) {
- next_domain->irqs[irq].acknowledge(irq, irq_desc + irq);
+ next_domain->irqs[irq].acknowledge(irq, irq_to_desc(irq));
m_ack = 1;
}
}
-
- /*
- * If the domain does not want the IRQ to be passed
- * down the interrupt pipe, exit the loop now.
- */
if (!test_bit(IPIPE_PASS_FLAG, &next_domain->irqs[irq].control))
break;
-
pos = next_domain->p_link.next;
}
* immediately to the current domain if the interrupt has been
* marked as 'sticky'. This search does not go beyond the
* current domain in the pipeline. We also enforce the
- * additional root stage lock (blackfin-specific). */
+ * additional root stage lock (blackfin-specific).
+ */
+ if (test_bit(IPIPE_SYNCDEFER_FLAG, &p->status))
+ s = __test_and_set_bit(IPIPE_STALL_FLAG, &p->status);
- if (test_bit(IPIPE_ROOTLOCK_FLAG, &ipipe_root_domain->flags))
- s = __test_and_set_bit(IPIPE_STALL_FLAG,
- &ipipe_root_cpudom_var(status));
-finalize:
+ /*
+ * If the interrupt preempted the head domain, then do not
+ * even try to walk the pipeline, unless an interrupt is
+ * pending for it.
+ */
+ if (test_bit(IPIPE_AHEAD_FLAG, &this_domain->flags) &&
+ ipipe_head_cpudom_var(irqpend_himask) == 0)
+ goto out;
__ipipe_walk_pipeline(head);
-
+out:
if (!s)
- __clear_bit(IPIPE_STALL_FLAG,
- &ipipe_root_cpudom_var(status));
+ __clear_bit(IPIPE_STALL_FLAG, &p->status);
}
int __ipipe_check_root(void)
void __ipipe_enable_irqdesc(struct ipipe_domain *ipd, unsigned irq)
{
- struct irq_desc *desc = irq_desc + irq;
+ struct irq_desc *desc = irq_to_desc(irq);
int prio = desc->ic_prio;
desc->depth = 0;
void __ipipe_disable_irqdesc(struct ipipe_domain *ipd, unsigned irq)
{
- struct irq_desc *desc = irq_desc + irq;
+ struct irq_desc *desc = irq_to_desc(irq);
int prio = desc->ic_prio;
if (ipd != &ipipe_root &&
{
unsigned long flags;
- /* We need to run the IRQ tail hook whenever we don't
+ /*
+ * We need to run the IRQ tail hook whenever we don't
* propagate a syscall to higher domains, because we know that
* important operations might be pending there (e.g. Xenomai
- * deferred rescheduling). */
+ * deferred rescheduling).
+ */
- if (!__ipipe_syscall_watched_p(current, regs->orig_p0)) {
+ if (regs->orig_p0 < NR_syscalls) {
void (*hook)(void) = (void (*)(void))__ipipe_irq_tail_hook;
hook();
- return 0;
+ if ((current->flags & PF_EVNOTIFY) == 0)
+ return 0;
}
/*
{
unsigned long flags;
+#ifdef CONFIG_IPIPE_DEBUG
if (irq >= IPIPE_NR_IRQS ||
(ipipe_virtual_irq_p(irq)
&& !test_bit(irq - IPIPE_VIRQ_BASE, &__ipipe_virtual_irq_map)))
return -EINVAL;
+#endif
local_irq_save_hw(flags);
-
__ipipe_handle_irq(irq, NULL);
-
local_irq_restore_hw(flags);
return 1;
}
-/* Move Linux IRQ to threads. */
-
-static int do_irqd(void *__desc)
+asmlinkage void __ipipe_sync_root(void)
{
- struct irq_desc *desc = __desc;
- unsigned irq = desc - irq_desc;
- int thrprio = desc->thr_prio;
- int thrmask = 1 << thrprio;
- int cpu = smp_processor_id();
- cpumask_t cpumask;
-
- sigfillset(¤t->blocked);
- current->flags |= PF_NOFREEZE;
- cpumask = cpumask_of_cpu(cpu);
- set_cpus_allowed(current, cpumask);
- ipipe_setscheduler_root(current, SCHED_FIFO, 50 + thrprio);
-
- while (!kthread_should_stop()) {
- local_irq_disable();
- if (!(desc->status & IRQ_SCHEDULED)) {
- set_current_state(TASK_INTERRUPTIBLE);
-resched:
- local_irq_enable();
- schedule();
- local_irq_disable();
- }
- __set_current_state(TASK_RUNNING);
- /*
- * If higher priority interrupt servers are ready to
- * run, reschedule immediately. We need this for the
- * GPIO demux IRQ handler to unmask the interrupt line
- * _last_, after all GPIO IRQs have run.
- */
- if (per_cpu(pending_irqthread_mask, cpu) & ~(thrmask|(thrmask-1)))
- goto resched;
- if (--per_cpu(pending_irq_count[thrprio], cpu) == 0)
- per_cpu(pending_irqthread_mask, cpu) &= ~thrmask;
- desc->status &= ~IRQ_SCHEDULED;
- desc->thr_handler(irq, &__raw_get_cpu_var(__ipipe_tick_regs));
- local_irq_enable();
- }
- __set_current_state(TASK_RUNNING);
- return 0;
-}
+ unsigned long flags;
-static void kick_irqd(unsigned irq, void *cookie)
-{
- struct irq_desc *desc = irq_desc + irq;
- int thrprio = desc->thr_prio;
- int thrmask = 1 << thrprio;
- int cpu = smp_processor_id();
-
- if (!(desc->status & IRQ_SCHEDULED)) {
- desc->status |= IRQ_SCHEDULED;
- per_cpu(pending_irqthread_mask, cpu) |= thrmask;
- ++per_cpu(pending_irq_count[thrprio], cpu);
- wake_up_process(desc->thread);
- }
-}
+ BUG_ON(irqs_disabled());
-int ipipe_start_irq_thread(unsigned irq, struct irq_desc *desc)
-{
- if (desc->thread || !create_irq_threads)
- return 0;
-
- desc->thread = kthread_create(do_irqd, desc, "IRQ %d", irq);
- if (desc->thread == NULL) {
- printk(KERN_ERR "irqd: could not create IRQ thread %d!\n", irq);
- return -ENOMEM;
- }
+ local_irq_save_hw(flags);
- wake_up_process(desc->thread);
+ clear_thread_flag(TIF_IRQ_SYNC);
- desc->thr_handler = ipipe_root_domain->irqs[irq].handler;
- ipipe_root_domain->irqs[irq].handler = &kick_irqd;
+ if (ipipe_root_cpudom_var(irqpend_himask) != 0)
+ __ipipe_sync_pipeline(IPIPE_IRQMASK_ANY);
- return 0;
+ local_irq_restore_hw(flags);
}
-void __init ipipe_init_irq_threads(void)
+void ___ipipe_sync_pipeline(unsigned long syncmask)
{
- unsigned irq;
- struct irq_desc *desc;
-
- create_irq_threads = 1;
+ struct ipipe_domain *ipd = ipipe_current_domain;
- for (irq = 0; irq < NR_IRQS; irq++) {
- desc = irq_desc + irq;
- if (desc->action != NULL ||
- (desc->status & IRQ_NOREQUEST) != 0)
- ipipe_start_irq_thread(irq, desc);
+ if (ipd == ipipe_root_domain) {
+ if (test_bit(IPIPE_SYNCDEFER_FLAG, &ipipe_root_cpudom_var(status)))
+ return;
}
+
+ __ipipe_sync_stage(syncmask);
}
EXPORT_SYMBOL(show_stack);
#endif
generic_handle_irq(irq);
-#ifndef CONFIG_IPIPE /* Useless and bugous over the I-pipe: IRQs are threaded. */
- /* If we're the only interrupt running (ignoring IRQ15 which is for
- syscalls), lower our priority to IRQ14 so that softirqs run at
- that level. If there's another, lower-level interrupt, irq_exit
- will defer softirqs to that. */
+#ifndef CONFIG_IPIPE
+ /*
+ * If we're the only interrupt running (ignoring IRQ15 which
+ * is for syscalls), lower our priority to IRQ14 so that
+ * softirqs run at that level. If there's another,
+ * lower-level interrupt, irq_exit will defer softirqs to
+ * that. If the interrupt pipeline is enabled, we are already
+ * running at IRQ14 priority, so we don't need this code.
+ */
CSYNC();
pending = bfin_read_IPEND() & ~0x8000;
other_ints = pending & (pending - 1);
static char cmdline[256];
static unsigned long len;
+#ifndef CONFIG_SMP
static int num1 __attribute__((l1_data));
void kgdb_l1_test(void) __attribute__((l1_text));
printk(KERN_ALERT "L1(after change) : data variable addr = 0x%p, data value is %d\n", &num1, num1);
return ;
}
+#endif
+
#if L2_LENGTH
static int num2 __attribute__((l2));
static int test_proc_output(char *buf)
{
kgdb_test("hello world!", 12, 0x55, 0x10);
+#ifndef CONFIG_SMP
kgdb_l1_test();
- #if L2_LENGTH
+#endif
+#if L2_LENGTH
kgdb_l2_test();
- #endif
+#endif
return 0;
}
#include <asm/asm-offsets.h>
#include <asm/dma.h>
#include <asm/fixed_code.h>
+#include <asm/cacheflush.h>
#include <asm/mem_map.h>
#define TEXT_OFFSET 0
} else if (addr >= FIXED_CODE_START
&& addr + sizeof(tmp) <= FIXED_CODE_END) {
- memcpy(&tmp, (const void *)(addr), sizeof(tmp));
+ copy_from_user_page(0, 0, 0, &tmp, (const void *)(addr), sizeof(tmp));
copied = sizeof(tmp);
} else
} else if (addr >= FIXED_CODE_START
&& addr + sizeof(data) <= FIXED_CODE_END) {
- memcpy((void *)(addr), &data, sizeof(data));
+ copy_to_user_page(0, 0, 0, (void *)(addr), &data, sizeof(data));
copied = sizeof(data);
} else
CPU, bfin_revid());
}
+ /* We can't run on BF548-0.1 due to ANOMALY 05000448 */
+ if (bfin_cpuid() == 0x27de && bfin_revid() == 1)
+ panic("You can't run on this processor due to 05000448\n");
+
printk(KERN_INFO "Blackfin Linux support by http://blackfin.uclinux.org/\n");
printk(KERN_INFO "Processor Speed: %lu MHz core clock and %lu MHz System Clock\n",
icache_size = 0;
seq_printf(m, "cache size\t: %d KB(L1 icache) "
- "%d KB(L1 dcache-%s) %d KB(L2 cache)\n",
+ "%d KB(L1 dcache%s) %d KB(L2 cache)\n",
icache_size, dcache_size,
#if defined CONFIG_BFIN_WB
- "wb"
+ "-wb"
#elif defined CONFIG_BFIN_WT
- "wt"
+ "-wt"
#endif
"", 0);
write_seqlock(&xtime_lock);
#if defined(CONFIG_TICK_SOURCE_SYSTMR0) && !defined(CONFIG_IPIPE)
-/* FIXME: Here TIMIL0 is not set when IPIPE enabled, why? */
+ /*
+ * TIMIL0 is latched in __ipipe_grab_irq() when the I-Pipe is
+ * enabled.
+ */
if (get_gptimer_status(0) & TIMER_STATUS_TIMIL0) {
#endif
do_timer(1);
.name = "bfin_mac",
.dev.platform_data = &bfin_mii_bus,
};
-#endif
#if defined(CONFIG_NET_DSA_KSZ8893M) || defined(CONFIG_NET_DSA_KSZ8893M_MODULE)
static struct dsa_platform_data ksz8893m_switch_data = {
.dev.platform_data = &ksz8893m_switch_data,
};
#endif
+#endif
#if defined(CONFIG_MTD_M25P80) \
|| defined(CONFIG_MTD_M25P80_MODULE)
};
#endif
+#if defined(CONFIG_BFIN_MAC) || defined(CONFIG_BFIN_MAC_MODULE)
#if defined(CONFIG_NET_DSA_KSZ8893M) \
|| defined(CONFIG_NET_DSA_KSZ8893M_MODULE)
/* SPI SWITCH CHIP */
.bits_per_word = 8,
};
#endif
+#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
-static struct bfin5xx_spi_chip spi_mmc_chip_info = {
- .enable_dma = 1,
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
+static struct bfin5xx_spi_chip mmc_spi_chip_info = {
+ .enable_dma = 0,
.bits_per_word = 8,
};
#endif
},
#endif
+#if defined(CONFIG_BFIN_MAC) || defined(CONFIG_BFIN_MAC_MODULE)
#if defined(CONFIG_NET_DSA_KSZ8893M) \
|| defined(CONFIG_NET_DSA_KSZ8893M_MODULE)
{
.mode = SPI_MODE_3,
},
#endif
+#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
{
- .modalias = "spi_mmc_dummy",
+ .modalias = "mmc_spi",
.max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
.bus_num = 0,
- .chip_select = 0,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
- .mode = SPI_MODE_3,
- },
- {
- .modalias = "spi_mmc",
- .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
- .bus_num = 0,
- .chip_select = CONFIG_SPI_MMC_CS_CHAN,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
+ .chip_select = 5,
+ .controller_data = &mmc_spi_chip_info,
.mode = SPI_MODE_3,
},
#endif
#if defined(CONFIG_BFIN_MAC) || defined(CONFIG_BFIN_MAC_MODULE)
&bfin_mii_bus,
&bfin_mac_device,
-#endif
-
#if defined(CONFIG_NET_DSA_KSZ8893M) || defined(CONFIG_NET_DSA_KSZ8893M_MODULE)
&ksz8893m_switch_device,
#endif
+#endif
#if defined(CONFIG_SPI_BFIN) || defined(CONFIG_SPI_BFIN_MODULE)
&bfin_spi0_device,
* File: include/asm-blackfin/mach-bf518/anomaly.h
* Bugs: Enter bugs at http://blackfin.uclinux.org/
*
- * Copyright (C) 2004-2008 Analog Devices Inc.
+ * Copyright (C) 2004-2009 Analog Devices Inc.
* Licensed under the GPL-2 or later.
*/
/* This file should be up to date with:
- * - ????
+ * - Revision B, 02/03/2009; ADSP-BF512/BF514/BF516/BF518 Blackfin Processor Anomaly List
*/
#ifndef _MACH_ANOMALY_H_
#define ANOMALY_05000122 (1)
/* False Hardware Error from an Access in the Shadow of a Conditional Branch */
#define ANOMALY_05000245 (1)
+/* Incorrect Timer Pulse Width in Single-Shot PWM_OUT Mode with External Clock */
+#define ANOMALY_05000254 (1)
/* Sensitivity To Noise with Slow Input Edge Rates on External SPORT TX and RX Clocks */
#define ANOMALY_05000265 (1)
/* False Hardware Errors Caused by Fetches at the Boundary of Reserved Memory */
#define ANOMALY_05000443 (1)
/* Incorrect L1 Instruction Bank B Memory Map Location */
#define ANOMALY_05000444 (1)
+/* Incorrect Default Hysteresis Setting for RESET, NMI, and BMODE Signals */
+#define ANOMALY_05000452 (1)
+/* PWM_TRIPB Signal Not Available on PG10 */
+#define ANOMALY_05000453 (1)
+/* PPI_FS3 is Driven One Half Cycle Later Than PPI Data */
+#define ANOMALY_05000455 (1)
/* Anomalies that don't exist on this proc */
#define ANOMALY_05000125 (0)
#define ANOMALY_05000263 (0)
#define ANOMALY_05000266 (0)
#define ANOMALY_05000273 (0)
+#define ANOMALY_05000278 (0)
#define ANOMALY_05000285 (0)
+#define ANOMALY_05000305 (0)
#define ANOMALY_05000307 (0)
#define ANOMALY_05000311 (0)
#define ANOMALY_05000312 (0)
#define ANOMALY_05000323 (0)
#define ANOMALY_05000353 (0)
#define ANOMALY_05000363 (0)
+#define ANOMALY_05000380 (0)
#define ANOMALY_05000386 (0)
#define ANOMALY_05000412 (0)
#define ANOMALY_05000432 (0)
+#define ANOMALY_05000447 (0)
+#define ANOMALY_05000448 (0)
#endif
CH_UART0_TX,
CH_UART0_RX,
#endif
-#ifdef CONFIG_BFIN_UART0_CTSRTS
+#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART0_CTS_PIN,
CONFIG_UART0_RTS_PIN,
#endif
CH_UART1_TX,
CH_UART1_RX,
#endif
-#ifdef CONFIG_BFIN_UART1_CTSRTS
+#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART1_CTS_PIN,
CONFIG_UART1_RTS_PIN,
#endif
};
#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
-static struct bfin5xx_spi_chip spi_mmc_chip_info = {
- .enable_dma = 1,
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
+static struct bfin5xx_spi_chip mmc_spi_chip_info = {
+ .enable_dma = 0,
.bits_per_word = 8,
};
#endif
.controller_data = &ad9960_spi_chip_info,
},
#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
{
- .modalias = "spi_mmc_dummy",
- .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
- .bus_num = 0,
- .chip_select = 0,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
- .mode = SPI_MODE_3,
- },
- {
- .modalias = "spi_mmc",
- .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
+ .modalias = "mmc_spi",
+ .max_speed_hz = 20000000, /* max spi clock (SCK) speed in HZ */
.bus_num = 0,
- .chip_select = CONFIG_SPI_MMC_CS_CHAN,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
+ .chip_select = 5,
+ .controller_data = &mmc_spi_chip_info,
.mode = SPI_MODE_3,
},
#endif
};
#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
-static struct bfin5xx_spi_chip spi_mmc_chip_info = {
- .enable_dma = 1,
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
+static struct bfin5xx_spi_chip mmc_spi_chip_info = {
+ .enable_dma = 0,
.bits_per_word = 8,
};
#endif
},
#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
{
- .modalias = "spi_mmc_dummy",
+ .modalias = "mmc_spi",
.max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
.bus_num = 0,
- .chip_select = 0,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
- .mode = SPI_MODE_3,
- },
- {
- .modalias = "spi_mmc",
- .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
- .bus_num = 0,
- .chip_select = CONFIG_SPI_MMC_CS_CHAN,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
+ .chip_select = 5,
+ .controller_data = &mmc_spi_chip_info,
.mode = SPI_MODE_3,
},
#endif
* File: include/asm-blackfin/mach-bf527/anomaly.h
* Bugs: Enter bugs at http://blackfin.uclinux.org/
*
- * Copyright (C) 2004-2008 Analog Devices Inc.
+ * Copyright (C) 2004-2009 Analog Devices Inc.
* Licensed under the GPL-2 or later.
*/
#define ANOMALY_05000263 (0)
#define ANOMALY_05000266 (0)
#define ANOMALY_05000273 (0)
+#define ANOMALY_05000278 (0)
#define ANOMALY_05000285 (0)
+#define ANOMALY_05000305 (0)
#define ANOMALY_05000307 (0)
#define ANOMALY_05000311 (0)
#define ANOMALY_05000312 (0)
#define ANOMALY_05000323 (0)
#define ANOMALY_05000363 (0)
#define ANOMALY_05000412 (0)
+#define ANOMALY_05000447 (0)
+#define ANOMALY_05000448 (0)
#endif
CH_UART0_TX,
CH_UART0_RX,
#endif
-#ifdef CONFIG_BFIN_UART0_CTSRTS
+#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART0_CTS_PIN,
CONFIG_UART0_RTS_PIN,
#endif
CH_UART1_TX,
CH_UART1_RX,
#endif
-#ifdef CONFIG_BFIN_UART1_CTSRTS
+#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART1_CTS_PIN,
CONFIG_UART1_RTS_PIN,
#endif
help
Core support for IP04/IP04 open hardware IP-PBX.
-config GENERIC_BF533_BOARD
- bool "Generic"
- help
- Generic or Custom board support.
-
endchoice
# arch/blackfin/mach-bf533/boards/Makefile
#
-obj-$(CONFIG_GENERIC_BF533_BOARD) += generic_board.o
obj-$(CONFIG_BFIN533_STAMP) += stamp.o
obj-$(CONFIG_BFIN532_IP0X) += ip0x.o
obj-$(CONFIG_BFIN533_EZKIT) += ezkit.o
};
#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
-static struct bfin5xx_spi_chip spi_mmc_chip_info = {
- .enable_dma = 1,
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
+static struct bfin5xx_spi_chip mmc_spi_chip_info = {
+ .enable_dma = 0,
.bits_per_word = 8,
};
#endif
},
#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
- {
- .modalias = "spi_mmc_dummy",
- .max_speed_hz = 20000000, /* max spi clock (SCK) speed in HZ */
- .bus_num = 0,
- .chip_select = 0,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
- .mode = SPI_MODE_3,
- },
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
{
- .modalias = "spi_mmc",
+ .modalias = "mmc_spi",
.max_speed_hz = 20000000, /* max spi clock (SCK) speed in HZ */
.bus_num = 0,
- .chip_select = CONFIG_SPI_MMC_CS_CHAN,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
+ .chip_select = 5,
+ .controller_data = &mmc_spi_chip_info,
.mode = SPI_MODE_3,
},
#endif
};
#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
-static struct bfin5xx_spi_chip spi_mmc_chip_info = {
- .enable_dma = 1,
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
+static struct bfin5xx_spi_chip mmc_spi_chip_info = {
+ .enable_dma = 0,
.bits_per_word = 8,
};
#endif
},
#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
- {
- .modalias = "spi_mmc_dummy",
- .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
- .bus_num = 0,
- .chip_select = 0,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
- .mode = SPI_MODE_3,
- },
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
{
- .modalias = "spi_mmc",
+ .modalias = "mmc_spi",
.max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
.bus_num = 0,
- .chip_select = CONFIG_SPI_MMC_CS_CHAN,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
+ .chip_select = 5,
+ .controller_data = &mmc_spi_chip_info,
.mode = SPI_MODE_3,
},
#endif
+++ /dev/null
-/*
- * File: arch/blackfin/mach-bf533/generic_board.c
- * Based on: arch/blackfin/mach-bf533/ezkit.c
- * Author: Aidan Williams <aidan@nicta.com.au>
- *
- * Created: 2005
- * Description:
- *
- * Modified:
- * Copyright 2005 National ICT Australia (NICTA)
- * Copyright 2004-2006 Analog Devices Inc.
- *
- * Bugs: Enter bugs at http://blackfin.uclinux.org/
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, see the file COPYING, or write
- * to the Free Software Foundation, Inc.,
- * 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <linux/device.h>
-#include <linux/platform_device.h>
-#include <linux/irq.h>
-
-/*
- * Name the Board for the /proc/cpuinfo
- */
-const char bfin_board_name[] = "UNKNOWN BOARD";
-
-#if defined(CONFIG_RTC_DRV_BFIN) || defined(CONFIG_RTC_DRV_BFIN_MODULE)
-static struct platform_device rtc_device = {
- .name = "rtc-bfin",
- .id = -1,
-};
-#endif
-
-/*
- * Driver needs to know address, irq and flag pin.
- */
-#if defined(CONFIG_SMC91X) || defined(CONFIG_SMC91X_MODULE)
-static struct resource smc91x_resources[] = {
- {
- .start = 0x20300300,
- .end = 0x20300300 + 16,
- .flags = IORESOURCE_MEM,
- }, {
- .start = IRQ_PROG_INTB,
- .end = IRQ_PROG_INTB,
- .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL,
- }, {
- .start = IRQ_PF7,
- .end = IRQ_PF7,
- .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL,
- },
-};
-
-static struct platform_device smc91x_device = {
- .name = "smc91x",
- .id = 0,
- .num_resources = ARRAY_SIZE(smc91x_resources),
- .resource = smc91x_resources,
-};
-#endif
-
-#if defined(CONFIG_BFIN_SIR) || defined(CONFIG_BFIN_SIR_MODULE)
-#ifdef CONFIG_BFIN_SIR0
-static struct resource bfin_sir0_resources[] = {
- {
- .start = 0xFFC00400,
- .end = 0xFFC004FF,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = IRQ_UART0_RX,
- .end = IRQ_UART0_RX+1,
- .flags = IORESOURCE_IRQ,
- },
- {
- .start = CH_UART0_RX,
- .end = CH_UART0_RX+1,
- .flags = IORESOURCE_DMA,
- },
-};
-
-static struct platform_device bfin_sir0_device = {
- .name = "bfin_sir",
- .id = 0,
- .num_resources = ARRAY_SIZE(bfin_sir0_resources),
- .resource = bfin_sir0_resources,
-};
-#endif
-#endif
-
-static struct platform_device *generic_board_devices[] __initdata = {
-#if defined(CONFIG_RTC_DRV_BFIN) || defined(CONFIG_RTC_DRV_BFIN_MODULE)
- &rtc_device,
-#endif
-
-#if defined(CONFIG_SMC91X) || defined(CONFIG_SMC91X_MODULE)
- &smc91x_device,
-#endif
-
-#if defined(CONFIG_BFIN_SIR) || defined(CONFIG_BFIN_SIR_MODULE)
-#ifdef CONFIG_BFIN_SIR0
- &bfin_sir0_device,
-#endif
-#endif
-};
-
-static int __init generic_board_init(void)
-{
- printk(KERN_INFO "%s(): registering device resources\n", __func__);
- return platform_add_devices(generic_board_devices, ARRAY_SIZE(generic_board_devices));
-}
-
-arch_initcall(generic_board_init);
#if defined(CONFIG_SPI_BFIN) || defined(CONFIG_SPI_BFIN_MODULE)
/* all SPI peripherals info goes here */
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
-static struct bfin5xx_spi_chip spi_mmc_chip_info = {
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
+static struct bfin5xx_spi_chip mmc_spi_chip_info = {
/*
* CPOL (Clock Polarity)
* 0 - Active high SCK
/* Notice: for blackfin, the speed_hz is the value of register
* SPI_BAUD, not the real baudrate */
static struct spi_board_info bfin_spi_board_info[] __initdata = {
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
{
- .modalias = "spi_mmc",
+ .modalias = "mmc_spi",
.max_speed_hz = 2,
.bus_num = 1,
- .chip_select = CONFIG_SPI_MMC_CS_CHAN,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
+ .chip_select = 5,
+ .controller_data = &mmc_spi_chip_info,
},
#endif
};
* File: include/asm-blackfin/mach-bf533/anomaly.h
* Bugs: Enter bugs at http://blackfin.uclinux.org/
*
- * Copyright (C) 2004-2008 Analog Devices Inc.
+ * Copyright (C) 2004-2009 Analog Devices Inc.
* Licensed under the GPL-2 or later.
*/
#define ANOMALY_05000301 (__SILICON_REVISION__ < 6)
/* SSYNCs After Writes To DMA MMR Registers May Not Be Handled Correctly */
#define ANOMALY_05000302 (__SILICON_REVISION__ < 5)
-/* New Feature: Additional Hysteresis on SPORT Input Pins (Not Available On Older Silicon) */
+/* SPORT_HYS Bit in PLL_CTL Register Is Not Functional */
#define ANOMALY_05000305 (__SILICON_REVISION__ < 5)
/* New Feature: Additional PPI Frame Sync Sampling Options (Not Available On Older Silicon) */
#define ANOMALY_05000306 (__SILICON_REVISION__ < 5)
#define ANOMALY_05000266 (0)
#define ANOMALY_05000323 (0)
#define ANOMALY_05000353 (1)
+#define ANOMALY_05000380 (0)
#define ANOMALY_05000386 (1)
#define ANOMALY_05000412 (0)
#define ANOMALY_05000432 (0)
#define ANOMALY_05000435 (0)
+#define ANOMALY_05000447 (0)
+#define ANOMALY_05000448 (0)
#endif
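
The ANOMALY_05000xxx macros above expand to compile-time constants: either a plain 0/1 or a comparison against __SILICON_REVISION__. Code can therefore guard workarounds with an ordinary if () or #if and pay nothing on unaffected silicon. A hedged sketch of both consumer forms, using anomaly numbers from this hunk and the Blackfin kernel's own SSYNC() macro:

#include <asm/blackfin.h>	/* SSYNC() */

#if ANOMALY_05000448
# error Anomaly 05000448 makes this part unusable; refusing to build.
#endif

static inline void dma_mmr_write_example(void)
{
	/* Folds away at compile time when the macro evaluates to 0. */
	if (ANOMALY_05000302)
		SSYNC();	/* extra sync before touching DMA MMRs */
}
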
CH_UART_TX,
CH_UART_RX,
#endif
-#ifdef CONFIG_BFIN_UART0_CTSRTS
+#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART0_CTS_PIN,
CONFIG_UART0_RTS_PIN,
#endif
help
Board supply package for CSP Minotaur
-config GENERIC_BF537_BOARD
- bool "Generic"
- help
- Generic or Custom board support.
-
endchoice
# arch/blackfin/mach-bf537/boards/Makefile
#
-obj-$(CONFIG_GENERIC_BF537_BOARD) += generic_board.o
obj-$(CONFIG_BFIN537_STAMP) += stamp.o
obj-$(CONFIG_BFIN537_BLUETECHNIX_CM) += cm_bf537.o
obj-$(CONFIG_BFIN537_BLUETECHNIX_TCM) += tcm_bf537.o
};
#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
-static struct bfin5xx_spi_chip spi_mmc_chip_info = {
- .enable_dma = 1,
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
+static struct bfin5xx_spi_chip mmc_spi_chip_info = {
+ .enable_dma = 0,
.bits_per_word = 8,
};
#endif
},
#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
- {
- .modalias = "spi_mmc_dummy",
- .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
- .bus_num = 0,
- .chip_select = 7,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
- .mode = SPI_MODE_3,
- },
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
{
- .modalias = "spi_mmc",
- .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
+ .modalias = "mmc_spi",
+ .max_speed_hz = 20000000, /* max spi clock (SCK) speed in HZ */
.bus_num = 0,
- .chip_select = CONFIG_SPI_MMC_CS_CHAN,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
+ .chip_select = 1,
+ .controller_data = &mmc_spi_chip_info,
.mode = SPI_MODE_3,
},
#endif
+++ /dev/null
-/*
- * File: arch/blackfin/mach-bf537/boards/generic_board.c
- * Based on: arch/blackfin/mach-bf533/boards/ezkit.c
- * Author: Aidan Williams <aidan@nicta.com.au>
- *
- * Created:
- * Description:
- *
- * Modified:
- * Copyright 2005 National ICT Australia (NICTA)
- * Copyright 2004-2008 Analog Devices Inc.
- *
- * Bugs: Enter bugs at http://blackfin.uclinux.org/
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, see the file COPYING, or write
- * to the Free Software Foundation, Inc.,
- * 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <linux/device.h>
-#include <linux/etherdevice.h>
-#include <linux/platform_device.h>
-#include <linux/mtd/mtd.h>
-#include <linux/mtd/partitions.h>
-#include <linux/spi/spi.h>
-#include <linux/spi/flash.h>
-#if defined(CONFIG_USB_ISP1362_HCD) || defined(CONFIG_USB_ISP1362_HCD_MODULE)
-#include <linux/usb/isp1362.h>
-#endif
-#include <linux/irq.h>
-#include <linux/interrupt.h>
-#include <linux/usb/sl811.h>
-#include <asm/dma.h>
-#include <asm/bfin5xx_spi.h>
-#include <asm/reboot.h>
-#include <asm/portmux.h>
-#include <linux/spi/ad7877.h>
-
-/*
- * Name the Board for the /proc/cpuinfo
- */
-const char bfin_board_name[] = "UNKNOWN BOARD";
-
-/*
- * Driver needs to know address, irq and flag pin.
- */
-
-#if defined(CONFIG_USB_ISP1760_HCD) || defined(CONFIG_USB_ISP1760_HCD_MODULE)
-#include <linux/usb/isp1760.h>
-static struct resource bfin_isp1760_resources[] = {
- [0] = {
- .start = 0x203C0000,
- .end = 0x203C0000 + 0x000fffff,
- .flags = IORESOURCE_MEM,
- },
- [1] = {
- .start = IRQ_PF7,
- .end = IRQ_PF7,
- .flags = IORESOURCE_IRQ,
- },
-};
-
-static struct isp1760_platform_data isp1760_priv = {
- .is_isp1761 = 0,
- .port1_disable = 0,
- .bus_width_16 = 1,
- .port1_otg = 0,
- .analog_oc = 0,
- .dack_polarity_high = 0,
- .dreq_polarity_high = 0,
-};
-
-static struct platform_device bfin_isp1760_device = {
- .name = "isp1760-hcd",
- .id = 0,
- .dev = {
- .platform_data = &isp1760_priv,
- },
- .num_resources = ARRAY_SIZE(bfin_isp1760_resources),
- .resource = bfin_isp1760_resources,
-};
-#endif
-
-#if defined(CONFIG_BFIN_CFPCMCIA) || defined(CONFIG_BFIN_CFPCMCIA_MODULE)
-static struct resource bfin_pcmcia_cf_resources[] = {
- {
- .start = 0x20310000, /* IO PORT */
- .end = 0x20312000,
- .flags = IORESOURCE_MEM,
- }, {
- .start = 0x20311000, /* Attribute Memory */
- .end = 0x20311FFF,
- .flags = IORESOURCE_MEM,
- }, {
- .start = IRQ_PF4,
- .end = IRQ_PF4,
- .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_LOWLEVEL,
- }, {
- .start = 6, /* Card Detect PF6 */
- .end = 6,
- .flags = IORESOURCE_IRQ,
- },
-};
-
-static struct platform_device bfin_pcmcia_cf_device = {
- .name = "bfin_cf_pcmcia",
- .id = -1,
- .num_resources = ARRAY_SIZE(bfin_pcmcia_cf_resources),
- .resource = bfin_pcmcia_cf_resources,
-};
-#endif
-
-#if defined(CONFIG_RTC_DRV_BFIN) || defined(CONFIG_RTC_DRV_BFIN_MODULE)
-static struct platform_device rtc_device = {
- .name = "rtc-bfin",
- .id = -1,
-};
-#endif
-
-#if defined(CONFIG_SMC91X) || defined(CONFIG_SMC91X_MODULE)
-static struct resource smc91x_resources[] = {
- {
- .name = "smc91x-regs",
- .start = 0x20300300,
- .end = 0x20300300 + 16,
- .flags = IORESOURCE_MEM,
- }, {
-
- .start = IRQ_PF7,
- .end = IRQ_PF7,
- .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL,
- },
-};
-static struct platform_device smc91x_device = {
- .name = "smc91x",
- .id = 0,
- .num_resources = ARRAY_SIZE(smc91x_resources),
- .resource = smc91x_resources,
-};
-#endif
-
-#if defined(CONFIG_DM9000) || defined(CONFIG_DM9000_MODULE)
-static struct resource dm9000_resources[] = {
- [0] = {
- .start = 0x203FB800,
- .end = 0x203FB800 + 1,
- .flags = IORESOURCE_MEM,
- },
- [1] = {
- .start = 0x203FB800 + 4,
- .end = 0x203FB800 + 5,
- .flags = IORESOURCE_MEM,
- },
- [2] = {
- .start = IRQ_PF9,
- .end = IRQ_PF9,
- .flags = (IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHEDGE),
- },
-};
-
-static struct platform_device dm9000_device = {
- .name = "dm9000",
- .id = -1,
- .num_resources = ARRAY_SIZE(dm9000_resources),
- .resource = dm9000_resources,
-};
-#endif
-
-#if defined(CONFIG_USB_SL811_HCD) || defined(CONFIG_USB_SL811_HCD_MODULE)
-static struct resource sl811_hcd_resources[] = {
- {
- .start = 0x20340000,
- .end = 0x20340000,
- .flags = IORESOURCE_MEM,
- }, {
- .start = 0x20340004,
- .end = 0x20340004,
- .flags = IORESOURCE_MEM,
- }, {
- .start = CONFIG_USB_SL811_BFIN_IRQ,
- .end = CONFIG_USB_SL811_BFIN_IRQ,
- .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL,
- },
-};
-
-#if defined(CONFIG_USB_SL811_BFIN_USE_VBUS)
-void sl811_port_power(struct device *dev, int is_on)
-{
- gpio_request(CONFIG_USB_SL811_BFIN_GPIO_VBUS, "usb:SL811_VBUS");
- gpio_direction_output(CONFIG_USB_SL811_BFIN_GPIO_VBUS, is_on);
-
-}
-#endif
-
-static struct sl811_platform_data sl811_priv = {
- .potpg = 10,
- .power = 250, /* == 500mA */
-#if defined(CONFIG_USB_SL811_BFIN_USE_VBUS)
- .port_power = &sl811_port_power,
-#endif
-};
-
-static struct platform_device sl811_hcd_device = {
- .name = "sl811-hcd",
- .id = 0,
- .dev = {
- .platform_data = &sl811_priv,
- },
- .num_resources = ARRAY_SIZE(sl811_hcd_resources),
- .resource = sl811_hcd_resources,
-};
-#endif
-
-#if defined(CONFIG_USB_ISP1362_HCD) || defined(CONFIG_USB_ISP1362_HCD_MODULE)
-static struct resource isp1362_hcd_resources[] = {
- {
- .start = 0x20360000,
- .end = 0x20360000,
- .flags = IORESOURCE_MEM,
- }, {
- .start = 0x20360004,
- .end = 0x20360004,
- .flags = IORESOURCE_MEM,
- }, {
- .start = CONFIG_USB_ISP1362_BFIN_GPIO_IRQ,
- .end = CONFIG_USB_ISP1362_BFIN_GPIO_IRQ,
- .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL,
- },
-};
-
-static struct isp1362_platform_data isp1362_priv = {
- .sel15Kres = 1,
- .clknotstop = 0,
- .oc_enable = 0,
- .int_act_high = 0,
- .int_edge_triggered = 0,
- .remote_wakeup_connected = 0,
- .no_power_switching = 1,
- .power_switching_mode = 0,
-};
-
-static struct platform_device isp1362_hcd_device = {
- .name = "isp1362-hcd",
- .id = 0,
- .dev = {
- .platform_data = &isp1362_priv,
- },
- .num_resources = ARRAY_SIZE(isp1362_hcd_resources),
- .resource = isp1362_hcd_resources,
-};
-#endif
-
-#if defined(CONFIG_BFIN_MAC) || defined(CONFIG_BFIN_MAC_MODULE)
-static struct platform_device bfin_mii_bus = {
- .name = "bfin_mii_bus",
-};
-
-static struct platform_device bfin_mac_device = {
- .name = "bfin_mac",
- .dev.platform_data = &bfin_mii_bus,
-};
-#endif
-
-#if defined(CONFIG_USB_NET2272) || defined(CONFIG_USB_NET2272_MODULE)
-static struct resource net2272_bfin_resources[] = {
- {
- .start = 0x20300000,
- .end = 0x20300000 + 0x100,
- .flags = IORESOURCE_MEM,
- }, {
- .start = IRQ_PF7,
- .end = IRQ_PF7,
- .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL,
- },
-};
-
-static struct platform_device net2272_bfin_device = {
- .name = "net2272",
- .id = -1,
- .num_resources = ARRAY_SIZE(net2272_bfin_resources),
- .resource = net2272_bfin_resources,
-};
-#endif
-
-#if defined(CONFIG_SPI_BFIN) || defined(CONFIG_SPI_BFIN_MODULE)
-/* all SPI peripherals info goes here */
-
-#if defined(CONFIG_MTD_M25P80) \
- || defined(CONFIG_MTD_M25P80_MODULE)
-static struct mtd_partition bfin_spi_flash_partitions[] = {
- {
- .name = "bootloader(spi)",
- .size = 0x00020000,
- .offset = 0,
- .mask_flags = MTD_CAP_ROM
- }, {
- .name = "linux kernel(spi)",
- .size = 0xe0000,
- .offset = 0x20000
- }, {
- .name = "file system(spi)",
- .size = 0x700000,
- .offset = 0x00100000,
- }
-};
-
-static struct flash_platform_data bfin_spi_flash_data = {
- .name = "m25p80",
- .parts = bfin_spi_flash_partitions,
- .nr_parts = ARRAY_SIZE(bfin_spi_flash_partitions),
- .type = "m25p64",
-};
-
-/* SPI flash chip (m25p64) */
-static struct bfin5xx_spi_chip spi_flash_chip_info = {
- .enable_dma = 0, /* use dma transfer with this chip*/
- .bits_per_word = 8,
-};
-#endif
-
-#if defined(CONFIG_SPI_ADC_BF533) \
- || defined(CONFIG_SPI_ADC_BF533_MODULE)
-/* SPI ADC chip */
-static struct bfin5xx_spi_chip spi_adc_chip_info = {
- .enable_dma = 1, /* use dma transfer with this chip*/
- .bits_per_word = 16,
-};
-#endif
-
-#if defined(CONFIG_SND_BLACKFIN_AD1836) \
- || defined(CONFIG_SND_BLACKFIN_AD1836_MODULE)
-static struct bfin5xx_spi_chip ad1836_spi_chip_info = {
- .enable_dma = 0,
- .bits_per_word = 16,
-};
-#endif
-
-#if defined(CONFIG_AD9960) || defined(CONFIG_AD9960_MODULE)
-static struct bfin5xx_spi_chip ad9960_spi_chip_info = {
- .enable_dma = 0,
- .bits_per_word = 16,
-};
-#endif
-
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
-static struct bfin5xx_spi_chip spi_mmc_chip_info = {
- .enable_dma = 1,
- .bits_per_word = 8,
-};
-#endif
-
-#if defined(CONFIG_PBX)
-static struct bfin5xx_spi_chip spi_si3xxx_chip_info = {
- .ctl_reg = 0x4, /* send zero */
- .enable_dma = 0,
- .bits_per_word = 8,
- .cs_change_per_word = 1,
-};
-#endif
-
-#if defined(CONFIG_TOUCHSCREEN_AD7877) || defined(CONFIG_TOUCHSCREEN_AD7877_MODULE)
-static struct bfin5xx_spi_chip spi_ad7877_chip_info = {
- .enable_dma = 0,
- .bits_per_word = 16,
-};
-
-static const struct ad7877_platform_data bfin_ad7877_ts_info = {
- .model = 7877,
- .vref_delay_usecs = 50, /* internal, no capacitor */
- .x_plate_ohms = 419,
- .y_plate_ohms = 486,
- .pressure_max = 1000,
- .pressure_min = 0,
- .stopacq_polarity = 1,
- .first_conversion_delay = 3,
- .acquisition_time = 1,
- .averaging = 1,
- .pen_down_acc_interval = 1,
-};
-#endif
-
-static struct spi_board_info bfin_spi_board_info[] __initdata = {
-#if defined(CONFIG_MTD_M25P80) \
- || defined(CONFIG_MTD_M25P80_MODULE)
- {
- /* the modalias must be the same as spi device driver name */
- .modalias = "m25p80", /* Name of spi_driver for this device */
- .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
- .bus_num = 0, /* Framework bus number */
- .chip_select = 1, /* Framework chip select. On STAMP537 it is SPISSEL1*/
- .platform_data = &bfin_spi_flash_data,
- .controller_data = &spi_flash_chip_info,
- .mode = SPI_MODE_3,
- },
-#endif
-
-#if defined(CONFIG_SPI_ADC_BF533) \
- || defined(CONFIG_SPI_ADC_BF533_MODULE)
- {
- .modalias = "bfin_spi_adc", /* Name of spi_driver for this device */
- .max_speed_hz = 6250000, /* max spi clock (SCK) speed in HZ */
- .bus_num = 0, /* Framework bus number */
- .chip_select = 1, /* Framework chip select. */
- .platform_data = NULL, /* No spi_driver specific config */
- .controller_data = &spi_adc_chip_info,
- },
-#endif
-
-#if defined(CONFIG_SND_BLACKFIN_AD1836) \
- || defined(CONFIG_SND_BLACKFIN_AD1836_MODULE)
- {
- .modalias = "ad1836-spi",
- .max_speed_hz = 3125000, /* max spi clock (SCK) speed in HZ */
- .bus_num = 0,
- .chip_select = CONFIG_SND_BLACKFIN_SPI_PFBIT,
- .controller_data = &ad1836_spi_chip_info,
- },
-#endif
-#if defined(CONFIG_AD9960) || defined(CONFIG_AD9960_MODULE)
- {
- .modalias = "ad9960-spi",
- .max_speed_hz = 10000000, /* max spi clock (SCK) speed in HZ */
- .bus_num = 0,
- .chip_select = 1,
- .controller_data = &ad9960_spi_chip_info,
- },
-#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
- {
- .modalias = "spi_mmc_dummy",
- .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
- .bus_num = 0,
- .chip_select = 0,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
- .mode = SPI_MODE_3,
- },
- {
- .modalias = "spi_mmc",
- .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
- .bus_num = 0,
- .chip_select = CONFIG_SPI_MMC_CS_CHAN,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
- .mode = SPI_MODE_3,
- },
-#endif
-#if defined(CONFIG_PBX)
- {
- .modalias = "fxs-spi",
- .max_speed_hz = 12500000, /* max spi clock (SCK) speed in HZ */
- .bus_num = 0,
- .chip_select = 8 - CONFIG_J11_JUMPER,
- .controller_data = &spi_si3xxx_chip_info,
- .mode = SPI_MODE_3,
- },
- {
- .modalias = "fxo-spi",
- .max_speed_hz = 12500000, /* max spi clock (SCK) speed in HZ */
- .bus_num = 0,
- .chip_select = 8 - CONFIG_J19_JUMPER,
- .controller_data = &spi_si3xxx_chip_info,
- .mode = SPI_MODE_3,
- },
-#endif
-#if defined(CONFIG_TOUCHSCREEN_AD7877) || defined(CONFIG_TOUCHSCREEN_AD7877_MODULE)
- {
- .modalias = "ad7877",
- .platform_data = &bfin_ad7877_ts_info,
- .irq = IRQ_PF6,
- .max_speed_hz = 12500000, /* max spi clock (SCK) speed in HZ */
- .bus_num = 0,
- .chip_select = 1,
- .controller_data = &spi_ad7877_chip_info,
- },
-#endif
-};
-
-/* SPI controller data */
-static struct bfin5xx_spi_master bfin_spi0_info = {
- .num_chipselect = 8,
- .enable_dma = 1, /* master has the ability to do dma transfer */
- .pin_req = {P_SPI0_SCK, P_SPI0_MISO, P_SPI0_MOSI, 0},
-};
-
-/* SPI (0) */
-static struct resource bfin_spi0_resource[] = {
- [0] = {
- .start = SPI0_REGBASE,
- .end = SPI0_REGBASE + 0xFF,
- .flags = IORESOURCE_MEM,
- },
- [1] = {
- .start = CH_SPI,
- .end = CH_SPI,
- .flags = IORESOURCE_IRQ,
- },
-};
-
-static struct platform_device bfin_spi0_device = {
- .name = "bfin-spi",
- .id = 0, /* Bus number */
- .num_resources = ARRAY_SIZE(bfin_spi0_resource),
- .resource = bfin_spi0_resource,
- .dev = {
- .platform_data = &bfin_spi0_info, /* Passed to driver */
- },
-};
-#endif /* spi master and devices */
-
-#if defined(CONFIG_FB_BF537_LQ035) || defined(CONFIG_FB_BF537_LQ035_MODULE)
-static struct platform_device bfin_fb_device = {
- .name = "bf537-lq035",
-};
-#endif
-
-#if defined(CONFIG_FB_BFIN_7393) || defined(CONFIG_FB_BFIN_7393_MODULE)
-static struct platform_device bfin_fb_adv7393_device = {
- .name = "bfin-adv7393",
-};
-#endif
-
-#if defined(CONFIG_SERIAL_BFIN) || defined(CONFIG_SERIAL_BFIN_MODULE)
-static struct resource bfin_uart_resources[] = {
- {
- .start = 0xFFC00400,
- .end = 0xFFC004FF,
- .flags = IORESOURCE_MEM,
- }, {
- .start = 0xFFC02000,
- .end = 0xFFC020FF,
- .flags = IORESOURCE_MEM,
- },
-};
-
-static struct platform_device bfin_uart_device = {
- .name = "bfin-uart",
- .id = 1,
- .num_resources = ARRAY_SIZE(bfin_uart_resources),
- .resource = bfin_uart_resources,
-};
-#endif
-
-#if defined(CONFIG_BFIN_SIR) || defined(CONFIG_BFIN_SIR_MODULE)
-#ifdef CONFIG_BFIN_SIR0
-static struct resource bfin_sir0_resources[] = {
- {
- .start = 0xFFC00400,
- .end = 0xFFC004FF,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = IRQ_UART0_RX,
- .end = IRQ_UART0_RX+1,
- .flags = IORESOURCE_IRQ,
- },
- {
- .start = CH_UART0_RX,
- .end = CH_UART0_RX+1,
- .flags = IORESOURCE_DMA,
- },
-};
-
-static struct platform_device bfin_sir0_device = {
- .name = "bfin_sir",
- .id = 0,
- .num_resources = ARRAY_SIZE(bfin_sir0_resources),
- .resource = bfin_sir0_resources,
-};
-#endif
-#ifdef CONFIG_BFIN_SIR1
-static struct resource bfin_sir1_resources[] = {
- {
- .start = 0xFFC02000,
- .end = 0xFFC020FF,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = IRQ_UART1_RX,
- .end = IRQ_UART1_RX+1,
- .flags = IORESOURCE_IRQ,
- },
- {
- .start = CH_UART1_RX,
- .end = CH_UART1_RX+1,
- .flags = IORESOURCE_DMA,
- },
-};
-
-static struct platform_device bfin_sir1_device = {
- .name = "bfin_sir",
- .id = 1,
- .num_resources = ARRAY_SIZE(bfin_sir1_resources),
- .resource = bfin_sir1_resources,
-};
-#endif
-#endif
-
-#if defined(CONFIG_I2C_BLACKFIN_TWI) || defined(CONFIG_I2C_BLACKFIN_TWI_MODULE)
-static struct resource bfin_twi0_resource[] = {
- [0] = {
- .start = TWI0_REGBASE,
- .end = TWI0_REGBASE + 0xFF,
- .flags = IORESOURCE_MEM,
- },
- [1] = {
- .start = IRQ_TWI,
- .end = IRQ_TWI,
- .flags = IORESOURCE_IRQ,
- },
-};
-
-static struct platform_device i2c_bfin_twi_device = {
- .name = "i2c-bfin-twi",
- .id = 0,
- .num_resources = ARRAY_SIZE(bfin_twi0_resource),
- .resource = bfin_twi0_resource,
-};
-#endif
-
-#if defined(CONFIG_SERIAL_BFIN_SPORT) || defined(CONFIG_SERIAL_BFIN_SPORT_MODULE)
-static struct platform_device bfin_sport0_uart_device = {
- .name = "bfin-sport-uart",
- .id = 0,
-};
-
-static struct platform_device bfin_sport1_uart_device = {
- .name = "bfin-sport-uart",
- .id = 1,
-};
-#endif
-
-static struct platform_device *stamp_devices[] __initdata = {
-#if defined(CONFIG_BFIN_CFPCMCIA) || defined(CONFIG_BFIN_CFPCMCIA_MODULE)
- &bfin_pcmcia_cf_device,
-#endif
-
-#if defined(CONFIG_RTC_DRV_BFIN) || defined(CONFIG_RTC_DRV_BFIN_MODULE)
- &rtc_device,
-#endif
-
-#if defined(CONFIG_USB_SL811_HCD) || defined(CONFIG_USB_SL811_HCD_MODULE)
- &sl811_hcd_device,
-#endif
-
-#if defined(CONFIG_USB_ISP1362_HCD) || defined(CONFIG_USB_ISP1362_HCD_MODULE)
- &isp1362_hcd_device,
-#endif
-
-#if defined(CONFIG_SMC91X) || defined(CONFIG_SMC91X_MODULE)
- &smc91x_device,
-#endif
-
-#if defined(CONFIG_DM9000) || defined(CONFIG_DM9000_MODULE)
- &dm9000_device,
-#endif
-
-#if defined(CONFIG_BFIN_MAC) || defined(CONFIG_BFIN_MAC_MODULE)
- &bfin_mii_bus,
- &bfin_mac_device,
-#endif
-
-#if defined(CONFIG_USB_NET2272) || defined(CONFIG_USB_NET2272_MODULE)
- &net2272_bfin_device,
-#endif
-
-#if defined(CONFIG_USB_ISP1760_HCD) || defined(CONFIG_USB_ISP1760_HCD_MODULE)
- &bfin_isp1760_device,
-#endif
-
-#if defined(CONFIG_SPI_BFIN) || defined(CONFIG_SPI_BFIN_MODULE)
- &bfin_spi0_device,
-#endif
-
-#if defined(CONFIG_FB_BF537_LQ035) || defined(CONFIG_FB_BF537_LQ035_MODULE)
- &bfin_fb_device,
-#endif
-
-#if defined(CONFIG_FB_BFIN_7393) || defined(CONFIG_FB_BFIN_7393_MODULE)
- &bfin_fb_adv7393_device,
-#endif
-
-#if defined(CONFIG_SERIAL_BFIN) || defined(CONFIG_SERIAL_BFIN_MODULE)
- &bfin_uart_device,
-#endif
-
-#if defined(CONFIG_BFIN_SIR) || defined(CONFIG_BFIN_SIR_MODULE)
-#ifdef CONFIG_BFIN_SIR0
- &bfin_sir0_device,
-#endif
-#ifdef CONFIG_BFIN_SIR1
- &bfin_sir1_device,
-#endif
-#endif
-
-#if defined(CONFIG_I2C_BLACKFIN_TWI) || defined(CONFIG_I2C_BLACKFIN_TWI_MODULE)
- &i2c_bfin_twi_device,
-#endif
-
-#if defined(CONFIG_SERIAL_BFIN_SPORT) || defined(CONFIG_SERIAL_BFIN_SPORT_MODULE)
- &bfin_sport0_uart_device,
- &bfin_sport1_uart_device,
-#endif
-};
-
-static int __init generic_init(void)
-{
- printk(KERN_INFO "%s(): registering device resources\n", __func__);
- platform_add_devices(stamp_devices, ARRAY_SIZE(stamp_devices));
-#if defined(CONFIG_SPI_BFIN) || defined(CONFIG_SPI_BFIN_MODULE)
- spi_register_board_info(bfin_spi_board_info,
- ARRAY_SIZE(bfin_spi_board_info));
-#endif
-
- return 0;
-}
-
-arch_initcall(generic_init);
-
-void native_machine_restart(char *cmd)
-{
- /* workaround reboot hang when booting from SPI */
- if ((bfin_read_SYSCR() & 0x7) == 0x3)
- bfin_reset_boot_spi_cs(P_DEFAULT_BOOT_SPI_CS);
-}
-
-#if defined(CONFIG_BFIN_MAC) || defined(CONFIG_BFIN_MAC_MODULE)
-void bfin_get_ether_addr(char *addr)
-{
- random_ether_addr(addr);
- printk(KERN_WARNING "%s:%s: Setting Ethernet MAC to a random one\n", __FILE__, __func__);
-}
-EXPORT_SYMBOL(bfin_get_ether_addr);
-#endif
};
#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
-static struct bfin5xx_spi_chip spi_mmc_chip_info = {
- .enable_dma = 1,
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
+static struct bfin5xx_spi_chip mmc_spi_chip_info = {
+ .enable_dma = 0,
.bits_per_word = 8,
};
#endif
},
#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
{
- .modalias = "spi_mmc_dummy",
+ .modalias = "mmc_spi",
.max_speed_hz = 5000000, /* max spi clock (SCK) speed in HZ */
.bus_num = 0,
- .chip_select = 0,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
- .mode = SPI_MODE_3,
- },
- {
- .modalias = "spi_mmc",
- .max_speed_hz = 5000000, /* max spi clock (SCK) speed in HZ */
- .bus_num = 0,
- .chip_select = CONFIG_SPI_MMC_CS_CHAN,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
+ .chip_select = 5,
+ .controller_data = &mmc_spi_chip_info,
.mode = SPI_MODE_3,
},
#endif
};
#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
-static struct bfin5xx_spi_chip spi_mmc_chip_info = {
- .enable_dma = 1,
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
+static struct bfin5xx_spi_chip mmc_spi_chip_info = {
+ .enable_dma = 0,
.bits_per_word = 8,
};
#endif
.controller_data = &ad9960_spi_chip_info,
},
#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
- {
- .modalias = "spi_mmc_dummy",
- .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
- .bus_num = 0,
- .chip_select = 7,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
- .mode = SPI_MODE_3,
- },
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
{
- .modalias = "spi_mmc",
+ .modalias = "mmc_spi",
.max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
.bus_num = 0,
- .chip_select = CONFIG_SPI_MMC_CS_CHAN,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
+ .chip_select = 5,
+ .controller_data = &mmc_spi_chip_info,
.mode = SPI_MODE_3,
},
#endif
};
#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
-static struct bfin5xx_spi_chip spi_mmc_chip_info = {
- .enable_dma = 1,
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
+static struct bfin5xx_spi_chip mmc_spi_chip_info = {
+ .enable_dma = 0,
.bits_per_word = 8,
};
#endif
},
#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
- {
- .modalias = "spi_mmc_dummy",
- .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
- .bus_num = 0,
- .chip_select = 7,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
- .mode = SPI_MODE_3,
- },
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
{
- .modalias = "spi_mmc",
+ .modalias = "mmc_spi",
.max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
.bus_num = 0,
- .chip_select = CONFIG_SPI_MMC_CS_CHAN,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
+ .chip_select = 5,
+ .controller_data = &mmc_spi_chip_info,
.mode = SPI_MODE_3,
},
#endif
* File: include/asm-blackfin/mach-bf537/anomaly.h
* Bugs: Enter bugs at http://blackfin.uclinux.org/
*
- * Copyright (C) 2004-2008 Analog Devices Inc.
+ * Copyright (C) 2004-2009 Analog Devices Inc.
* Licensed under the GPL-2 or later.
*/
#define ANOMALY_05000301 (1)
/* SSYNCs After Writes To CAN/DMA MMR Registers Are Not Always Handled Correctly */
#define ANOMALY_05000304 (__SILICON_REVISION__ < 3)
-/* New Feature: Additional Hysteresis on SPORT Input Pins (Not Available On Older Silicon) */
+/* SPORT_HYS Bit in PLL_CTL Register Is Not Functional */
#define ANOMALY_05000305 (__SILICON_REVISION__ < 3)
/* SCKELOW Bit Does Not Maintain State Through Hibernate */
#define ANOMALY_05000307 (__SILICON_REVISION__ < 3)
#define ANOMALY_05000323 (0)
#define ANOMALY_05000353 (1)
#define ANOMALY_05000363 (0)
+#define ANOMALY_05000380 (0)
#define ANOMALY_05000386 (1)
#define ANOMALY_05000412 (0)
#define ANOMALY_05000432 (0)
#define ANOMALY_05000435 (0)
+#define ANOMALY_05000447 (0)
+#define ANOMALY_05000448 (0)
#endif
CH_UART0_TX,
CH_UART0_RX,
#endif
-#ifdef CONFIG_BFIN_UART0_CTSRTS
+#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART0_CTS_PIN,
CONFIG_UART0_RTS_PIN,
#endif
CH_UART1_TX,
CH_UART1_RX,
#endif
-#ifdef CONFIG_BFIN_UART1_CTSRTS
+#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART1_CTS_PIN,
CONFIG_UART1_RTS_PIN,
#endif
* File: include/asm-blackfin/mach-bf538/anomaly.h
* Bugs: Enter bugs at http://blackfin.uclinux.org/
*
- * Copyright (C) 2004-2008 Analog Devices Inc.
+ * Copyright (C) 2004-2009 Analog Devices Inc.
* Licensed under the GPL-2 or later.
*/
#define ANOMALY_05000198 (0)
#define ANOMALY_05000230 (0)
#define ANOMALY_05000263 (0)
+#define ANOMALY_05000305 (0)
#define ANOMALY_05000311 (0)
#define ANOMALY_05000323 (0)
#define ANOMALY_05000353 (1)
#define ANOMALY_05000363 (0)
+#define ANOMALY_05000380 (0)
#define ANOMALY_05000386 (1)
#define ANOMALY_05000412 (0)
#define ANOMALY_05000432 (0)
#define ANOMALY_05000435 (0)
+#define ANOMALY_05000447 (0)
+#define ANOMALY_05000448 (0)
#endif
CH_UART0_TX,
CH_UART0_RX,
#endif
-#ifdef CONFIG_BFIN_UART0_CTSRTS
+#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART0_CTS_PIN,
CONFIG_UART0_RTS_PIN,
#endif
CH_UART1_TX,
CH_UART1_RX,
#endif
-#ifdef CONFIG_BFIN_UART1_CTSRTS
+#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART1_CTS_PIN,
CONFIG_UART1_RTS_PIN,
#endif
* File: include/asm-blackfin/mach-bf548/anomaly.h
* Bugs: Enter bugs at http://blackfin.uclinux.org/
*
- * Copyright (C) 2004-2008 Analog Devices Inc.
+ * Copyright (C) 2004-2009 Analog Devices Inc.
* Licensed under the GPL-2 or later.
*/
/* This file should be up to date with:
- * - Revision G, 08/07/2008; ADSP-BF542/BF544/BF547/BF548/BF549 Blackfin Processor Anomaly List
+ * - Revision H, 01/16/2009; ADSP-BF542/BF544/BF547/BF548/BF549 Blackfin Processor Anomaly List
*/
#ifndef _MACH_ANOMALY_H_
#define ANOMALY_05000371 (__SILICON_REVISION__ < 2)
/* USB DP/DM Data Pins May Lose State When Entering Hibernate */
#define ANOMALY_05000372 (__SILICON_REVISION__ < 1)
-/* Mobile DDR Operation Not Functional */
-#define ANOMALY_05000377 (1)
/* Security/Authentication Speedpath Causes Authentication To Fail To Initiate */
#define ANOMALY_05000378 (__SILICON_REVISION__ < 2)
/* 16-Bit NAND FLASH Boot Mode Is Not Functional */
#define ANOMALY_05000429 (__SILICON_REVISION__ < 2)
/* Software System Reset Corrupts PLL_LOCKCNT Register */
#define ANOMALY_05000430 (__SILICON_REVISION__ >= 2)
+/* Incorrect Use of Stack in Lockbox Firmware During Authentication */
+#define ANOMALY_05000431 (__SILICON_REVISION__ < 3)
+/* OTP Write Accesses Not Supported */
+#define ANOMALY_05000442 (__SILICON_REVISION__ < 1)
/* IFLUSH Instruction at End of Hardware Loop Causes Infinite Stall */
#define ANOMALY_05000443 (1)
+/* CDMAPRIO and L2DMAPRIO Bits in the SYSCR Register Are Not Functional */
+#define ANOMALY_05000446 (1)
+/* UART IrDA Receiver Fails on Extended Bit Pulses */
+#define ANOMALY_05000447 (1)
+/* DDR Clock Duty Cycle Spec Violation (tCH, tCL) */
+#define ANOMALY_05000448 (__SILICON_REVISION__ == 1)
+/* Reduced Timing Margins on DDR Output Setup and Hold (tDS and tDH) */
+#define ANOMALY_05000449 (__SILICON_REVISION__ == 1)
+/* USB DMA Mode 1 Short Packet Data Corruption */
+#define ANOMALY_05000450 (1)
/* Anomalies that don't exist on this proc */
#define ANOMALY_05000125 (0)
#define ANOMALY_05000263 (0)
#define ANOMALY_05000266 (0)
#define ANOMALY_05000273 (0)
+#define ANOMALY_05000278 (0)
+#define ANOMALY_05000305 (0)
#define ANOMALY_05000307 (0)
#define ANOMALY_05000311 (0)
#define ANOMALY_05000323 (0)
#define UART_ENABLE_INTS(x, v) UART_SET_IER(x, v)
#define UART_DISABLE_INTS(x) UART_CLEAR_IER(x, 0xF)
-#if defined(CONFIG_BFIN_UART0_CTSRTS) || defined(CONFIG_BFIN_UART1_CTSRTS)
+#if defined(CONFIG_BFIN_UART0_CTSRTS) || defined(CONFIG_BFIN_UART2_CTSRTS)
# define CONFIG_SERIAL_BFIN_CTSRTS
# ifndef CONFIG_UART0_CTS_PIN
# define CONFIG_UART0_RTS_PIN -1
# endif
-# ifndef CONFIG_UART1_CTS_PIN
-# define CONFIG_UART1_CTS_PIN -1
+# ifndef CONFIG_UART2_CTS_PIN
+# define CONFIG_UART2_CTS_PIN -1
# endif
-# ifndef CONFIG_UART1_RTS_PIN
-# define CONFIG_UART1_RTS_PIN -1
+# ifndef CONFIG_UART2_RTS_PIN
+# define CONFIG_UART2_RTS_PIN -1
# endif
#endif
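
On the BF548 the hardware flow-control pins belong to UART2 rather than UART1, so the header now keys off CONFIG_BFIN_UART2_CTSRTS and defaults the pin numbers to -1, the conventional "no pin" sentinel. A short sketch of how such a sentinel is typically consumed (the helper is illustrative, not the serial driver's actual code):

#include <linux/gpio.h>
#include <linux/kernel.h>

static void example_claim_flow_pins(int cts_pin, int rts_pin)
{
	if (cts_pin >= 0)	/* -1 means: CTS not wired on this board */
		WARN_ON(gpio_request(cts_pin, "uart-cts"));
	if (rts_pin >= 0)
		WARN_ON(gpio_request(rts_pin, "uart-rts"));
}
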
CH_UART0_TX,
CH_UART0_RX,
#endif
-#ifdef CONFIG_BFIN_UART0_CTSRTS
+#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART0_CTS_PIN,
CONFIG_UART0_RTS_PIN,
#endif
#ifdef CONFIG_SERIAL_BFIN_DMA
CH_UART1_TX,
CH_UART1_RX,
+#endif
+#ifdef CONFIG_SERIAL_BFIN_CTSRTS
+ 0,
+ 0,
#endif
},
#endif
CH_UART2_TX,
CH_UART2_RX,
#endif
-#ifdef CONFIG_BFIN_UART2_CTSRTS
+#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART2_CTS_PIN,
CONFIG_UART2_RTS_PIN,
#endif
#ifdef CONFIG_SERIAL_BFIN_DMA
CH_UART3_TX,
CH_UART3_RX,
+#endif
+#ifdef CONFIG_SERIAL_BFIN_CTSRTS
+ 0,
+ 0,
#endif
},
#endif
#define IRQ_MXVR_ERROR BFIN_IRQ(51) /* MXVR Status (Error) Interrupt */
#define IRQ_MXVR_MSG BFIN_IRQ(52) /* MXVR Message Interrupt */
#define IRQ_MXVR_PKT BFIN_IRQ(53) /* MXVR Packet Interrupt */
-#define IRQ_EPP1_ERROR BFIN_IRQ(54) /* EPPI1 Error Interrupt */
-#define IRQ_EPP2_ERROR BFIN_IRQ(55) /* EPPI2 Error Interrupt */
+#define IRQ_EPPI1_ERROR BFIN_IRQ(54) /* EPPI1 Error Interrupt */
+#define IRQ_EPPI2_ERROR BFIN_IRQ(55) /* EPPI2 Error Interrupt */
#define IRQ_UART3_ERROR BFIN_IRQ(56) /* UART3 Status (Error) Interrupt */
#define IRQ_HOST_ERROR BFIN_IRQ(57) /* HOST Status (Error) Interrupt */
#define IRQ_PIXC_ERROR BFIN_IRQ(59) /* PIXC Status (Error) Interrupt */
#define IRQ_UART2_ERR IRQ_UART2_ERROR
#define IRQ_CAN0_ERR IRQ_CAN0_ERROR
#define IRQ_MXVR_ERR IRQ_MXVR_ERROR
-#define IRQ_EPP1_ERR IRQ_EPP1_ERROR
-#define IRQ_EPP2_ERR IRQ_EPP2_ERROR
+#define IRQ_EPPI1_ERR IRQ_EPPI1_ERROR
+#define IRQ_EPPI2_ERR IRQ_EPPI2_ERROR
#define IRQ_UART3_ERR IRQ_UART3_ERROR
#define IRQ_HOST_ERR IRQ_HOST_ERROR
#define IRQ_PIXC_ERR IRQ_PIXC_ERROR
help
CM-BF561 support for EVAL- and DEV-Board.
-config GENERIC_BF561_BOARD
- bool "Generic"
- help
- Generic or Custom board support.
-
endchoice
# arch/blackfin/mach-bf561/boards/Makefile
#
-obj-$(CONFIG_GENERIC_BF561_BOARD) += generic_board.o
obj-$(CONFIG_BFIN561_BLUETECHNIX_CM) += cm_bf561.o
obj-$(CONFIG_BFIN561_EZKIT) += ezkit.o
obj-$(CONFIG_BFIN561_TEPLA) += tepla.o
};
#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
-static struct bfin5xx_spi_chip spi_mmc_chip_info = {
- .enable_dma = 1,
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
+static struct bfin5xx_spi_chip mmc_spi_chip_info = {
+ .enable_dma = 0,
.bits_per_word = 8,
};
#endif
.controller_data = &ad9960_spi_chip_info,
},
#endif
-#if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
+#if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
{
- .modalias = "spi_mmc",
+ .modalias = "mmc_spi",
.max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
.bus_num = 0,
- .chip_select = CONFIG_SPI_MMC_CS_CHAN,
- .platform_data = NULL,
- .controller_data = &spi_mmc_chip_info,
+ .chip_select = 5,
+ .controller_data = &mmc_spi_chip_info,
.mode = SPI_MODE_3,
},
#endif
+++ /dev/null
-/*
- * File: arch/blackfin/mach-bf561/generic_board.c
- * Based on: arch/blackfin/mach-bf533/ezkit.c
- * Author: Aidan Williams <aidan@nicta.com.au>
- *
- * Created:
- * Description:
- *
- * Modified:
- * Copyright 2005 National ICT Australia (NICTA)
- * Copyright 2004-2006 Analog Devices Inc.
- *
- * Bugs: Enter bugs at http://blackfin.uclinux.org/
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, see the file COPYING, or write
- * to the Free Software Foundation, Inc.,
- * 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <linux/device.h>
-#include <linux/platform_device.h>
-#include <linux/irq.h>
-
-const char bfin_board_name[] = "UNKNOWN BOARD";
-
-/*
- * Driver needs to know address, irq and flag pin.
- */
-#if defined(CONFIG_SMC91X) || defined(CONFIG_SMC91X_MODULE)
-static struct resource smc91x_resources[] = {
- {
- .start = 0x2C010300,
- .end = 0x2C010300 + 16,
- .flags = IORESOURCE_MEM,
- }, {
- .start = IRQ_PROG_INTB,
- .end = IRQ_PROG_INTB,
- .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL,
- }, {
- .start = IRQ_PF9,
- .end = IRQ_PF9,
- .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL,
- },
-};
-
-static struct platform_device smc91x_device = {
- .name = "smc91x",
- .id = 0,
- .num_resources = ARRAY_SIZE(smc91x_resources),
- .resource = smc91x_resources,
-};
-#endif
-
-#if defined(CONFIG_BFIN_SIR) || defined(CONFIG_BFIN_SIR_MODULE)
-#ifdef CONFIG_BFIN_SIR0
-static struct resource bfin_sir0_resources[] = {
- {
- .start = 0xFFC00400,
- .end = 0xFFC004FF,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = IRQ_UART0_RX,
- .end = IRQ_UART0_RX+1,
- .flags = IORESOURCE_IRQ,
- },
- {
- .start = CH_UART0_RX,
- .end = CH_UART0_RX+1,
- .flags = IORESOURCE_DMA,
- },
-};
-
-static struct platform_device bfin_sir0_device = {
- .name = "bfin_sir",
- .id = 0,
- .num_resources = ARRAY_SIZE(bfin_sir0_resources),
- .resource = bfin_sir0_resources,
-};
-#endif
-#endif
-
-static struct platform_device *generic_board_devices[] __initdata = {
-#if defined(CONFIG_SMC91X) || defined(CONFIG_SMC91X_MODULE)
- &smc91x_device,
-#endif
-
-#if defined(CONFIG_BFIN_SIR) || defined(CONFIG_BFIN_SIR_MODULE)
-#ifdef CONFIG_BFIN_SIR0
- &bfin_sir0_device,
-#endif
-#endif
-};
-
-static int __init generic_board_init(void)
-{
- printk(KERN_INFO "%s(): registering device resources\n", __func__);
- return platform_add_devices(generic_board_devices,
- ARRAY_SIZE(generic_board_devices));
-}
-
-arch_initcall(generic_board_init);
* File: include/asm-blackfin/mach-bf561/anomaly.h
* Bugs: Enter bugs at http://blackfin.uclinux.org/
*
- * Copyright (C) 2004-2008 Analog Devices Inc.
+ * Copyright (C) 2004-2009 Analog Devices Inc.
* Licensed under the GPL-2 or later.
*/
#define ANOMALY_05000301 (1)
/* SSYNCs After Writes To DMA MMR Registers May Not Be Handled Correctly */
#define ANOMALY_05000302 (1)
-/* New Feature: Additional Hysteresis on SPORT Input Pins (Not Available On Older Silicon) */
+/* SPORT_HYS Bit in PLL_CTL Register Is Not Functional */
#define ANOMALY_05000305 (__SILICON_REVISION__ < 5)
/* SCKELOW Bit Does Not Maintain State Through Hibernate */
#define ANOMALY_05000307 (__SILICON_REVISION__ < 5)
#define ANOMALY_05000273 (0)
#define ANOMALY_05000311 (0)
#define ANOMALY_05000353 (1)
+#define ANOMALY_05000380 (0)
#define ANOMALY_05000386 (1)
#define ANOMALY_05000432 (0)
#define ANOMALY_05000435 (0)
+#define ANOMALY_05000447 (0)
+#define ANOMALY_05000448 (0)
#endif
CH_UART_TX,
CH_UART_RX,
#endif
-#ifdef CONFIG_BFIN_UART0_CTSRTS
+#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART0_CTS_PIN,
CONFIG_UART0_RTS_PIN,
#endif
#if (CONFIG_BOOT_LOAD & 0x3)
# error "The kernel load address must be 4 byte aligned"
#endif
+
+/* The entire kernel must be able to make a 24bit pcrel call to start of L1 */
+#if ((0xffffffff - L1_CODE_START + 1) + CONFIG_BOOT_LOAD) > 0x1000000
+# error "The kernel load address is too high; keep it below 10meg for safety"
+#endif
+
+#if ANOMALY_05000448
+# error You are using a part with anomaly 05000448; this issue causes random memory read/write failures, i.e. random crashes.
+#endif
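
The first check encodes the reach of a 24-bit PC-relative call (16 MiB): the kernel loaded at CONFIG_BOOT_LOAD must be able to call into L1 instruction SRAM, which sits at the very top of the 4 GiB address space, so the wrap-around distance from L1 to address 0 plus the load address must stay under 0x1000000. A worked instance, assuming the usual L1_CODE_START of 0xFFA00000 on BF533-class parts:

/*
 * (0xffffffff - 0xFFA00000 + 1)  = 0x00600000   (6 MiB wrap from L1 to 0)
 * 0x00600000 + CONFIG_BOOT_LOAD  < 0x01000000   (24-bit pcrel reach)
 *           => CONFIG_BOOT_LOAD  < 0x00A00000   (hence "below 10meg")
 */
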
/* Invalidate all instruction cache lines associated with this memory area */
ENTRY(_blackfin_icache_flush_range)
+/*
+ * Workaround to avoid loading a wrong instruction after invalidating the
+ * icache, when the following sequence occurs.
+ *
+ * 1) One instruction address is cached in the instruction cache.
+ * 2) This instruction in SDRAM is changed.
+ * 3) IFLUSH[P0] is executed only once in blackfin_icache_flush_range().
+ * 4) This instruction is executed again, but the old one is loaded.
+ */
+ P0 = R0;
+ IFLUSH[P0];
do_flush IFLUSH, , nop
ENDPROC(_blackfin_icache_flush_range)
/* Flush all cache lines associated with this area of memory. */
ENTRY(_blackfin_icache_dcache_flush_range)
+/*
+ * Workaround to avoid loading a wrong instruction after invalidating the
+ * icache, when the following sequence occurs.
+ *
+ * 1) One instruction address is cached in the instruction cache.
+ * 2) This instruction in SDRAM is changed.
+ * 3) IFLUSH[P0] is executed only once in blackfin_icache_flush_range().
+ * 4) This instruction is executed again, but the old one is loaded.
+ */
+ P0 = R0;
+ IFLUSH[P0];
do_flush FLUSH, IFLUSH
ENDPROC(_blackfin_icache_dcache_flush_range)
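
The extra IFLUSH[P0] on the first address forces a refetch before the main flush loop runs, closing the window in which a just-patched instruction could still be served stale from the icache. Seen from C, the sequence the comment describes looks like the sketch below; the patching itself is illustrative, while blackfin_icache_flush_range() is the kernel's own entry point for the routine above:

extern void blackfin_icache_flush_range(unsigned long start,
					unsigned long end);

static void patch_and_run(unsigned short *insn, unsigned short opcode,
			  void (*entry)(void))
{
	*insn = opcode;			/* 2) change the copy in SDRAM */
	blackfin_icache_flush_range((unsigned long)insn,
				    (unsigned long)insn + 2);	/* 3) flush */
	entry();			/* 4) must fetch the new opcode */
}
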
#define SDGCTL_WIDTH (1 << 31) /* SDRAM external data path width */
#define PLL_CTL_VAL \
(((CONFIG_VCO_MULT & 63) << 9) | CLKIN_HALF | \
- (PLL_BYPASS << 8) | (ANOMALY_05000265 ? 0x8000 : 0))
+ (PLL_BYPASS << 8) | (ANOMALY_05000305 ? 0 : 0x8000))
__attribute__((l1_text))
static void do_sync(void)
#endif
#ifdef PINT0_ASSIGN
+ PM_SYS_PUSH(PINT0_MASK_SET)
+ PM_SYS_PUSH(PINT1_MASK_SET)
+ PM_SYS_PUSH(PINT2_MASK_SET)
+ PM_SYS_PUSH(PINT3_MASK_SET)
PM_SYS_PUSH(PINT0_ASSIGN)
PM_SYS_PUSH(PINT1_ASSIGN)
PM_SYS_PUSH(PINT2_ASSIGN)
PM_SYS_PUSH(PINT3_ASSIGN)
+ PM_SYS_PUSH(PINT0_INVERT_SET)
+ PM_SYS_PUSH(PINT1_INVERT_SET)
+ PM_SYS_PUSH(PINT2_INVERT_SET)
+ PM_SYS_PUSH(PINT3_INVERT_SET)
+ PM_SYS_PUSH(PINT0_EDGE_SET)
+ PM_SYS_PUSH(PINT1_EDGE_SET)
+ PM_SYS_PUSH(PINT2_EDGE_SET)
+ PM_SYS_PUSH(PINT3_EDGE_SET)
#endif
PM_SYS_PUSH(EBIU_AMBCTL0)
PM_SYS_POP(EBIU_AMBCTL0)
#ifdef PINT0_ASSIGN
+ PM_SYS_POP(PINT3_EDGE_SET)
+ PM_SYS_POP(PINT2_EDGE_SET)
+ PM_SYS_POP(PINT1_EDGE_SET)
+ PM_SYS_POP(PINT0_EDGE_SET)
+ PM_SYS_POP(PINT3_INVERT_SET)
+ PM_SYS_POP(PINT2_INVERT_SET)
+ PM_SYS_POP(PINT1_INVERT_SET)
+ PM_SYS_POP(PINT0_INVERT_SET)
PM_SYS_POP(PINT3_ASSIGN)
PM_SYS_POP(PINT2_ASSIGN)
PM_SYS_POP(PINT1_ASSIGN)
PM_SYS_POP(PINT0_ASSIGN)
+ PM_SYS_POP(PINT3_MASK_SET)
+ PM_SYS_POP(PINT2_MASK_SET)
+ PM_SYS_POP(PINT1_MASK_SET)
+ PM_SYS_POP(PINT0_MASK_SET)
#endif
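
The suspend path pushes the new PINT mask/invert/edge registers around the ASSIGN registers, and the resume path pops everything in exactly the reverse order, because the saved values live on a stack. A trivial C restatement of that LIFO discipline (pm_stack[] stands in for the MMR values PM_SYS_PUSH/PM_SYS_POP move):

static unsigned long pm_stack[32];
static int pm_sp;

static void pm_push(unsigned long v) { pm_stack[pm_sp++] = v; }
static unsigned long pm_pop(void) { return pm_stack[--pm_sp]; }

static void save_mmrs(const unsigned long *mmr, int n)
{
	int i;

	for (i = 0; i < n; i++)
		pm_push(mmr[i]);
}

static void restore_mmrs(unsigned long *mmr, int n)
{
	int i;

	for (i = n - 1; i >= 0; i--)	/* exact reverse of save order */
		mmr[i] = pm_pop();
}
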
#ifdef SICA_IWR1
p2 = [p2];
[p2+(TASK_THREAD+THREAD_KSP)] = sp;
+#ifdef CONFIG_IPIPE
+ r0 = sp;
+ SP += -12;
+ call ___ipipe_syscall_root;
+ SP += 12;
+ cc = r0 == 1;
+ if cc jump .Lsyscall_really_exit;
+ cc = r0 == -1;
+ if cc jump .Lresume_userspace;
+ r3 = [sp + PT_R3];
+ r4 = [sp + PT_R4];
+ p0 = [sp + PT_ORIG_P0];
+#endif /* CONFIG_IPIPE */
/* Check the System Call */
r7 = __NR_syscall;
r7 = r7 & r4;
.Lsyscall_resched:
+#ifdef CONFIG_IPIPE
+ cc = BITTST(r7, TIF_IRQ_SYNC);
+ if !cc jump .Lsyscall_no_irqsync;
+ [--sp] = reti;
+ r0 = [sp++];
+ SP += -12;
+ call ___ipipe_sync_root;
+ SP += 12;
+ jump .Lresume_userspace_1;
+.Lsyscall_no_irqsync:
+#endif
cc = BITTST(r7, TIF_NEED_RESCHED);
if !cc jump .Lsyscall_sigpending;
.Lsyscall_really_exit:
r5 = [sp + PT_RESERVED];
rets = r5;
+#ifdef CONFIG_IPIPE
+ [--sp] = reti;
+ r5 = [sp++];
+#endif /* CONFIG_IPIPE */
rts;
ENDPROC(_system_call)
ENDPROC(_resume)
ENTRY(_ret_from_exception)
+#ifdef CONFIG_IPIPE
+ [--sp] = rets;
+ SP += -12;
+ call ___ipipe_check_root
+ SP += 12
+ rets = [sp++];
+ cc = r0 == 0;
+ if cc jump 4f; /* not on behalf of Linux, get out */
+#endif /* CONFIG_IPIPE */
p2.l = lo(IPEND);
p2.h = hi(IPEND);
rts;
ENDPROC(_ret_from_exception)
+#ifdef CONFIG_IPIPE
+
+_sync_root_irqs:
+ [--sp] = reti; /* Reenable interrupts */
+ r0 = [sp++];
+ jump.l ___ipipe_sync_root
+
+_resume_kernel_from_int:
+ r0.l = _sync_root_irqs
+ r0.h = _sync_root_irqs
+ [--sp] = rets;
+ [--sp] = ( r7:4, p5:3 );
+ SP += -12;
+ call ___ipipe_call_irqtail
+ SP += 12;
+ ( r7:4, p5:3 ) = [sp++];
+ rets = [sp++];
+ rts
+#else
+#define _resume_kernel_from_int 2f
+#endif
+
ENTRY(_return_from_int)
/* If someone else already raised IRQ 15, do nothing. */
csync;
r1 = r0 - r1;
r2 = r0 & r1;
cc = r2 == 0;
- if !cc jump 2f;
+ if !cc jump _resume_kernel_from_int;
/* Lower the interrupt level to 15. */
p0.l = lo(EVT15);
#ifdef CONFIG_IPIPE
ENTRY(___ipipe_call_irqtail)
+ p0 = r0;
r0.l = 1f;
r0.h = 1f;
reti = r0;
1:
[--sp] = rets;
[--sp] = ( r7:4, p5:3 );
- p0.l = ___ipipe_irq_tail_hook;
- p0.h = ___ipipe_irq_tail_hook;
- p0 = [p0];
sp += -12;
call (p0);
sp += 12;
p0.h = hi(EVT14);
[p0] = r0;
csync;
- r0 = 0x401f;
+ r0 = 0x401f (z);
sti r0;
raise 14;
[--sp] = reti; /* IRQs on. */
p0.h = _bfin_irq_flags;
r0 = [p0];
sti r0;
-#if 0 /* FIXME: this actually raises scheduling latencies */
- /* Reenable interrupts */
- [--sp] = reti;
- r0 = [sp++];
-#endif
rts;
ENDPROC(___ipipe_call_irqtail)
+
#endif /* CONFIG_IPIPE */
static void bfin_internal_mask_irq(unsigned int irq)
{
+ unsigned long flags;
+
#ifdef CONFIG_BF53x
+ local_irq_save_hw(flags);
bfin_write_SIC_IMASK(bfin_read_SIC_IMASK() &
~(1 << SIC_SYSIRQ(irq)));
#else
unsigned mask_bank, mask_bit;
+ local_irq_save_hw(flags);
mask_bank = SIC_SYSIRQ(irq) / 32;
mask_bit = SIC_SYSIRQ(irq) % 32;
bfin_write_SIC_IMASK(mask_bank, bfin_read_SIC_IMASK(mask_bank) &
~(1 << mask_bit));
#endif
#endif
+ local_irq_restore_hw(flags);
}
static void bfin_internal_unmask_irq(unsigned int irq)
{
+ unsigned long flags;
+
#ifdef CONFIG_BF53x
+ local_irq_save_hw(flags);
bfin_write_SIC_IMASK(bfin_read_SIC_IMASK() |
(1 << SIC_SYSIRQ(irq)));
#else
unsigned mask_bank, mask_bit;
+ local_irq_save_hw(flags);
mask_bank = SIC_SYSIRQ(irq) / 32;
mask_bit = SIC_SYSIRQ(irq) % 32;
bfin_write_SIC_IMASK(mask_bank, bfin_read_SIC_IMASK(mask_bank) |
(1 << mask_bit));
#endif
#endif
+ local_irq_restore_hw(flags);
}
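
Both mask and unmask now bracket the read-modify-write of SIC_IMASK with local_irq_save_hw()/local_irq_restore_hw(). Under I-pipe, the ordinary local_irq_save() only stalls the root domain virtually, so a real interrupt could still slip in between the read and the write and its own update would be lost. The race being closed, in schematic form:

/*
 * unmask path                       interrupt handler (masks line b)
 * v = read(SIC_IMASK);
 *                                   w = read(SIC_IMASK);
 *                                   write(SIC_IMASK, w & ~(1 << b));
 * write(SIC_IMASK, v | (1 << a));   <- silently re-unmasks line b
 */
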
#ifdef CONFIG_PM
static inline void bfin_set_irq_handler(unsigned irq, irq_flow_handler_t handle)
{
#ifdef CONFIG_IPIPE
- _set_irq_handler(irq, handle_edge_irq);
+ _set_irq_handler(irq, handle_level_irq);
#else
struct irq_desc *desc = irq_desc + irq;
/* May not call generic set_irq_handler() due to spinlock
#endif
default:
#ifdef CONFIG_IPIPE
- /*
- * We want internal interrupt sources to be masked, because
- * ISRs may trigger interrupts recursively (e.g. DMA), but
- * interrupts are _not_ masked at CPU level. So let's handle
- * them as level interrupts.
- */
- set_irq_handler(irq, handle_level_irq);
+ /*
+ * We want internal interrupt sources to be
+ * masked, because ISRs may trigger interrupts
+ * recursively (e.g. DMA), but interrupts are
+ * _not_ masked at CPU level. So let's handle
+ * most of them as level interrupts, except
+ * the timer interrupt which is special.
+ */
+ if (irq == IRQ_SYSTMR || irq == IRQ_CORETMR)
+ set_irq_handler(irq, handle_simple_irq);
+ else
+ set_irq_handler(irq, handle_level_irq);
#else /* !CONFIG_IPIPE */
set_irq_handler(irq, handle_simple_irq);
#endif /* !CONFIG_IPIPE */
#ifdef CONFIG_IPIPE
for (irq = 0; irq < NR_IRQS; irq++) {
- struct irq_desc *desc = irq_desc + irq;
+ struct irq_desc *desc = irq_to_desc(irq);
desc->ic_prio = __ipipe_get_irq_priority(irq);
- desc->thr_prio = __ipipe_get_irqthread_priority(irq);
}
#endif /* CONFIG_IPIPE */
return IVG15;
}
-int __ipipe_get_irqthread_priority(unsigned irq)
-{
- int ient, prio;
- int demux_irq;
-
- /* The returned priority value is rescaled to [0..IVG13+1]
- * with 0 being the lowest effective priority level. */
-
- if (irq <= IRQ_CORETMR)
- return IVG13 - irq + 1;
-
- /* GPIO IRQs are given the priority of the demux
- * interrupt. */
- if (IS_GPIOIRQ(irq)) {
-#if defined(CONFIG_BF54x)
- u32 bank = PINT_2_BANK(irq2pint_lut[irq - SYS_IRQS]);
- demux_irq = (bank == 0 ? IRQ_PINT0 :
- bank == 1 ? IRQ_PINT1 :
- bank == 2 ? IRQ_PINT2 :
- IRQ_PINT3);
-#elif defined(CONFIG_BF561)
- demux_irq = (irq >= IRQ_PF32 ? IRQ_PROG2_INTA :
- irq >= IRQ_PF16 ? IRQ_PROG1_INTA :
- IRQ_PROG0_INTA);
-#elif defined(CONFIG_BF52x)
- demux_irq = (irq >= IRQ_PH0 ? IRQ_PORTH_INTA :
- irq >= IRQ_PG0 ? IRQ_PORTG_INTA :
- IRQ_PORTF_INTA);
-#else
- demux_irq = irq;
-#endif
- return IVG13 - PRIO_GPIODEMUX(demux_irq) + 1;
- }
-
- /* The GPIO demux interrupt is given a lower priority
- * than the GPIO IRQs, so that its threaded handler
- * unmasks the interrupt line after the decoded IRQs
- * have been processed. */
- prio = PRIO_GPIODEMUX(irq);
- /* demux irq? */
- if (prio != -1)
- return IVG13 - prio;
-
- for (ient = 0; ient < NR_PERI_INTS; ient++) {
- struct ivgx *ivg = ivg_table + ient;
- if (ivg->irqno == irq) {
- for (prio = 0; prio <= IVG13-IVG7; prio++) {
- if (ivg7_13[prio].ifirst <= ivg &&
- ivg7_13[prio].istop > ivg)
- return IVG7 - prio;
- }
- }
- }
-
- return 0;
-}
-
/* Hw interrupts are disabled on entry (check SAVE_CONTEXT). */
#ifdef CONFIG_DO_IRQ_L1
__attribute__((l1_text))
#endif
asmlinkage int __ipipe_grab_irq(int vec, struct pt_regs *regs)
{
+ struct ipipe_percpu_domain_data *p = ipipe_root_cpudom_ptr();
+ struct ipipe_domain *this_domain = ipipe_current_domain;
struct ivgx *ivg_stop = ivg7_13[vec-IVG7].istop;
struct ivgx *ivg = ivg7_13[vec-IVG7].ifirst;
- int irq;
+ int irq, s;
if (likely(vec == EVT_IVTMR_P)) {
irq = IRQ_CORETMR;
- goto handle_irq;
+ goto core_tick;
}
SSYNC();
irq = ivg->irqno;
if (irq == IRQ_SYSTMR) {
+#ifdef CONFIG_GENERIC_CLOCKEVENTS
+core_tick:
+#else
bfin_write_TIMER_STATUS(1); /* Latch TIMIL0 */
+#endif
/* This is basically what we need from the register frame. */
__raw_get_cpu_var(__ipipe_tick_regs).ipend = regs->ipend;
__raw_get_cpu_var(__ipipe_tick_regs).pc = regs->pc;
- if (!ipipe_root_domain_p)
- __raw_get_cpu_var(__ipipe_tick_regs).ipend |= 0x10;
- else
+ if (this_domain != ipipe_root_domain)
__raw_get_cpu_var(__ipipe_tick_regs).ipend &= ~0x10;
+ else
+ __raw_get_cpu_var(__ipipe_tick_regs).ipend |= 0x10;
}
-handle_irq:
+#ifndef CONFIG_GENERIC_CLOCKEVENTS
+core_tick:
+#endif
+ if (this_domain == ipipe_root_domain) {
+ s = __test_and_set_bit(IPIPE_SYNCDEFER_FLAG, &p->status);
+ barrier();
+ }
ipipe_trace_irq_entry(irq);
__ipipe_handle_irq(irq, regs);
- ipipe_trace_irq_exit(irq);
+ ipipe_trace_irq_exit(irq);
- if (ipipe_root_domain_p)
- return !test_bit(IPIPE_STALL_FLAG, &ipipe_root_cpudom_var(status));
+ if (this_domain == ipipe_root_domain) {
+ set_thread_flag(TIF_IRQ_SYNC);
+ if (!s) {
+ __clear_bit(IPIPE_SYNCDEFER_FLAG, &p->status);
+ return !test_bit(IPIPE_STALL_FLAG, &p->status);
+ }
+ }
return 0;
}
kfree(msg);
break;
case BFIN_IPI_CALL_FUNC:
+ spin_unlock(&msg_queue->lock);
ipi_call_function(cpu, msg);
+ spin_lock(&msg_queue->lock);
break;
case BFIN_IPI_CPU_STOP:
+ spin_unlock(&msg_queue->lock);
ipi_cpu_stop(cpu);
+ spin_lock(&msg_queue->lock);
kfree(msg);
break;
default:
smp_flush_data.start = start;
smp_flush_data.end = end;
- if (smp_call_function(&ipi_flush_icache, &smp_flush_data, 1))
+ if (smp_call_function(&ipi_flush_icache, &smp_flush_data, 0))
printk(KERN_WARNING "SMP: failed to run I-cache flush request on other CPUs\n");
}
EXPORT_SYMBOL_GPL(smp_icache_flush_range_others);
}
}
-asmlinkage void init_pda(void)
+asmlinkage void __init init_pda(void)
{
unsigned int cpu = raw_smp_processor_id();
if (SN_DMA_ADDRTYPE(dma_flags) == SN_DMA_ADDR_PHYS)
pci_addr = IS_PIC_SOFT(pcibus_info) ?
PHYS_TO_DMA(paddr) :
- PHYS_TO_TIODMA(paddr) | dma_attributes;
+ PHYS_TO_TIODMA(paddr);
else
- pci_addr = IS_PIC_SOFT(pcibus_info) ?
- paddr :
- paddr | dma_attributes;
+ pci_addr = paddr;
+ pci_addr |= dma_attributes;
/* Handle Bus mode */
if (IS_PCIX(pcibus_info))
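
In the old code the `| dma_attributes` lived inside individual ternary arms, so the PHYS_TO_DMA() path never picked up the attribute bits; hoisting `pci_addr |= dma_attributes;` out applies them uniformly on every path. A reduced illustration of the pitfall and the fix (the helpers are stand-ins for PHYS_TO_DMA()/PHYS_TO_TIODMA()):

static unsigned long phys_to_dma_x(unsigned long p)    { return p + 0x1000; }
static unsigned long phys_to_tiodma_x(unsigned long p) { return p + 0x2000; }

static unsigned long make_pci_addr(int is_pic, unsigned long paddr,
				   unsigned long dma_attributes)
{
	unsigned long pci_addr;

	/* Old shape -- attributes ORed into only one arm:
	 *   pci_addr = is_pic ? phys_to_dma_x(paddr)
	 *                     : phys_to_tiodma_x(paddr) | dma_attributes;
	 */
	pci_addr = is_pic ? phys_to_dma_x(paddr) : phys_to_tiodma_x(paddr);
	pci_addr |= dma_attributes;	/* applied uniformly, as in the fix */
	return pci_addr;
}
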
#include <asm/coldfire.h>
#include <asm/mcfsim.h>
#include <asm/mcfdma.h>
+#include <asm/mcfuart.h>
/***************************************************************************/
#include <asm/coldfire.h>
#include <asm/mcfsim.h>
#include <asm/mcfuart.h>
-#include <asm/mcfqspi.h>
#ifdef CONFIG_MTD_PARTITIONS
#include <linux/mtd/partitions.h>
/***************************************************************************/
void coldfire_reset(void);
-static void coldfire_qspi_cs_control(u8 cs, u8 command);
-
-/***************************************************************************/
-
-#if defined(CONFIG_SPI)
-
-#if defined(CONFIG_WILDFIRE)
-#define SPI_NUM_CHIPSELECTS 0x02
-#define SPI_PAR_VAL 0x07 /* Enable DIN, DOUT, CLK */
-#define SPI_CS_MASK 0x18
-
-#define FLASH_BLOCKSIZE (1024*64)
-#define FLASH_NUMBLOCKS 16
-#define FLASH_TYPE "m25p80"
-
-#define M25P80_CS 0
-#define MMC_CS 1
-
-#ifdef CONFIG_MTD_PARTITIONS
-static struct mtd_partition stm25p_partitions[] = {
- /* sflash */
- [0] = {
- .name = "stm25p80",
- .offset = 0x00000000,
- .size = FLASH_BLOCKSIZE * FLASH_NUMBLOCKS,
- .mask_flags = 0
- }
-};
-
-#endif
-
-#elif defined(CONFIG_WILDFIREMOD)
-
-#define SPI_NUM_CHIPSELECTS 0x08
-#define SPI_PAR_VAL 0x07 /* Enable DIN, DOUT, CLK */
-#define SPI_CS_MASK 0x78
-
-#define FLASH_BLOCKSIZE (1024*64)
-#define FLASH_NUMBLOCKS 64
-#define FLASH_TYPE "m25p32"
-/* Reserve 1M for the kernel parition */
-#define FLASH_KERNEL_SIZE (1024 * 1024)
-
-#define M25P80_CS 5
-#define MMC_CS 6
-
-#ifdef CONFIG_MTD_PARTITIONS
-static struct mtd_partition stm25p_partitions[] = {
- /* sflash */
- [0] = {
- .name = "kernel",
- .offset = FLASH_BLOCKSIZE * FLASH_NUMBLOCKS - FLASH_KERNEL_SIZE,
- .size = FLASH_KERNEL_SIZE,
- .mask_flags = 0
- },
- [1] = {
- .name = "image",
- .offset = 0x00000000,
- .size = FLASH_BLOCKSIZE * FLASH_NUMBLOCKS - FLASH_KERNEL_SIZE,
- .mask_flags = 0
- },
- [2] = {
- .name = "all",
- .offset = 0x00000000,
- .size = FLASH_BLOCKSIZE * FLASH_NUMBLOCKS,
- .mask_flags = 0
- }
-};
-#endif
-
-#else
-#define SPI_NUM_CHIPSELECTS 0x04
-#define SPI_PAR_VAL 0x7F /* Enable DIN, DOUT, CLK, CS0 - CS4 */
-#endif
-
-#ifdef MMC_CS
-static struct coldfire_spi_chip flash_chip_info = {
- .mode = SPI_MODE_0,
- .bits_per_word = 16,
- .del_cs_to_clk = 17,
- .del_after_trans = 1,
- .void_write_data = 0
-};
-
-static struct coldfire_spi_chip mmc_chip_info = {
- .mode = SPI_MODE_0,
- .bits_per_word = 16,
- .del_cs_to_clk = 17,
- .del_after_trans = 1,
- .void_write_data = 0xFFFF
-};
-#endif
-
-#ifdef M25P80_CS
-static struct flash_platform_data stm25p80_platform_data = {
- .name = "ST M25P80 SPI Flash chip",
-#ifdef CONFIG_MTD_PARTITIONS
- .parts = stm25p_partitions,
- .nr_parts = sizeof(stm25p_partitions) / sizeof(*stm25p_partitions),
-#endif
- .type = FLASH_TYPE
-};
-#endif
-
-static struct spi_board_info spi_board_info[] __initdata = {
-#ifdef M25P80_CS
- {
- .modalias = "m25p80",
- .max_speed_hz = 16000000,
- .bus_num = 1,
- .chip_select = M25P80_CS,
- .platform_data = &stm25p80_platform_data,
- .controller_data = &flash_chip_info
- },
-#endif
-#ifdef MMC_CS
- {
- .modalias = "mmc_spi",
- .max_speed_hz = 16000000,
- .bus_num = 1,
- .chip_select = MMC_CS,
- .controller_data = &mmc_chip_info
- }
-#endif
-};
-
-static struct coldfire_spi_master coldfire_master_info = {
- .bus_num = 1,
- .num_chipselect = SPI_NUM_CHIPSELECTS,
- .irq_source = MCF5282_QSPI_IRQ_SOURCE,
- .irq_vector = MCF5282_QSPI_IRQ_VECTOR,
- .irq_mask = ((0x01 << MCF5282_QSPI_IRQ_SOURCE) | 0x01),
- .irq_lp = 0x2B, /* Level 5 and Priority 3 */
- .par_val = SPI_PAR_VAL,
- .cs_control = coldfire_qspi_cs_control,
-};
-
-static struct resource coldfire_spi_resources[] = {
- [0] = {
- .name = "qspi-par",
- .start = MCF5282_QSPI_PAR,
- .end = MCF5282_QSPI_PAR,
- .flags = IORESOURCE_MEM
- },
-
- [1] = {
- .name = "qspi-module",
- .start = MCF5282_QSPI_QMR,
- .end = MCF5282_QSPI_QMR + 0x18,
- .flags = IORESOURCE_MEM
- },
-
- [2] = {
- .name = "qspi-int-level",
- .start = MCF5282_INTC0 + MCFINTC_ICR0 + MCF5282_QSPI_IRQ_SOURCE,
- .end = MCF5282_INTC0 + MCFINTC_ICR0 + MCF5282_QSPI_IRQ_SOURCE,
- .flags = IORESOURCE_MEM
- },
-
- [3] = {
- .name = "qspi-int-mask",
- .start = MCF5282_INTC0 + MCFINTC_IMRL,
- .end = MCF5282_INTC0 + MCFINTC_IMRL,
- .flags = IORESOURCE_MEM
- }
-};
-
-static struct platform_device coldfire_spi = {
- .name = "spi_coldfire",
- .id = -1,
- .resource = coldfire_spi_resources,
- .num_resources = ARRAY_SIZE(coldfire_spi_resources),
- .dev = {
- .platform_data = &coldfire_master_info,
- }
-};
-
-static void coldfire_qspi_cs_control(u8 cs, u8 command)
-{
- u8 cs_bit = ((0x01 << cs) << 3) & SPI_CS_MASK;
-
-#if defined(CONFIG_WILDFIRE)
- u8 cs_mask = ~(((0x01 << cs) << 3) & SPI_CS_MASK);
-#endif
-#if defined(CONFIG_WILDFIREMOD)
- u8 cs_mask = (cs << 3) & SPI_CS_MASK;
-#endif
-
- /*
- * Don't do anything if the chip select is not
- * one of the port qs pins.
- */
- if (command & QSPI_CS_INIT) {
-#if defined(CONFIG_WILDFIRE)
- MCF5282_GPIO_DDRQS |= cs_bit;
- MCF5282_GPIO_PQSPAR &= ~cs_bit;
-#endif
-
-#if defined(CONFIG_WILDFIREMOD)
- MCF5282_GPIO_DDRQS |= SPI_CS_MASK;
- MCF5282_GPIO_PQSPAR &= ~SPI_CS_MASK;
-#endif
- }
-
- if (command & QSPI_CS_ASSERT) {
- MCF5282_GPIO_PORTQS &= ~SPI_CS_MASK;
- MCF5282_GPIO_PORTQS |= cs_mask;
- } else if (command & QSPI_CS_DROP) {
- MCF5282_GPIO_PORTQS |= SPI_CS_MASK;
- }
-}
-
-static int __init spi_dev_init(void)
-{
- int retval;
-
- retval = platform_device_register(&coldfire_spi);
- if (retval < 0)
- return retval;
-
- if (ARRAY_SIZE(spi_board_info))
- retval = spi_register_board_info(spi_board_info, ARRAY_SIZE(spi_board_info));
-
- return retval;
-}
-
-#endif /* CONFIG_SPI */
/***************************************************************************/
/*
* Architecture specific compatibility types
*/
+#include <linux/seccomp.h>
+#include <linux/thread_info.h>
#include <linux/types.h>
#include <asm/page.h>
#include <asm/ptrace.h>
compat_ulong_t __unused2;
};
+static inline int is_compat_task(void)
+{
+ return test_thread_flag(TIF_32BIT);
+}
+
#endif /* _ASM_COMPAT_H */
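
These hunks (and the matching powerpc and sparc64 ones below) move is_compat_task() into the per-arch asm/compat.h, where it simply tests TIF_32BIT, so generic code such as seccomp no longer needs arch-specific thread-flag checks. An illustrative consumer, assuming a CONFIG_COMPAT kernel:

#include <linux/compat.h>	/* pulls in the asm/compat.h definition */

static int example_syscall_width(void)
{
	return is_compat_task() ? 32 : 64;
}
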
#ifndef __ASM_SECCOMP_H
-#include <linux/thread_info.h>
#include <linux/unistd.h>
#define __NR_seccomp_read __NR_read
compat_ulong_t __unused6;
};
+static inline int is_compat_task(void)
+{
+ return test_thread_flag(TIF_32BIT);
+}
+
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_COMPAT_H */
#ifndef _ASM_POWERPC_SECCOMP_H
#define _ASM_POWERPC_SECCOMP_H
-#ifdef __KERNEL__
-#include <linux/thread_info.h>
-#endif
-
#include <linux/unistd.h>
#define __NR_seccomp_read __NR_read
{
unsigned int val;
+ /* Do not do the fixup on other platforms! */
+ if (!machine_is(gef_sbc610))
+ return;
+
printk(KERN_INFO "Running NEC uPD720101 Fixup\n");
/* Ensure ports 1, 2, 3, 4 & 5 are enabled */
module_init(aes_s390_init);
module_exit(aes_s390_fini);
-MODULE_ALIAS("aes");
+MODULE_ALIAS("aes-all");
MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm");
MODULE_LICENSE("GPL");
#include <linux/gpio.h>
#include <linux/spi/spi.h>
#include <linux/spi/spi_gpio.h>
+#include <media/soc_camera.h>
#include <media/soc_camera_platform.h>
#include <media/sh_mobile_ceu.h>
#include <video/sh_mobile_lcdc.h>
unsigned int __unused2;
};
+static inline int is_compat_task(void)
+{
+ return test_thread_flag(TIF_32BIT);
+}
+
#endif /* _ASM_SPARC64_COMPAT_H */
#ifndef _ASM_SECCOMP_H
-#include <linux/thread_info.h> /* already defines TIF_32BIT */
-
-#ifndef TIF_32BIT
-#error "unexpected TIF_32BIT on sparc64"
-#endif
-
#include <linux/unistd.h>
#define __NR_seccomp_read __NR_read
select HAVE_GENERIC_DMA_COHERENT if X86_32
select HAVE_EFFICIENT_UNALIGNED_ACCESS
select USER_STACKTRACE_SUPPORT
+ select HAVE_KERNEL_GZIP
+ select HAVE_KERNEL_BZIP2
+ select HAVE_KERNEL_LZMA
config ARCH_DEFCONFIG
string
config HAVE_SETUP_PER_CPU_AREA
def_bool y
+config HAVE_DYNAMIC_PER_CPU_AREA
+ def_bool y
+
config HAVE_CPUMASK_OF_CPU_MAP
def_bool X86_64_SMP
Specify the maximum number of NUMA Nodes available on the target
  system. Increases memory reserved to accommodate various tables.
-config HAVE_ARCH_BOOTMEM_NODE
+config HAVE_ARCH_BOOTMEM
def_bool y
depends on X86_32 && NUMA
config KEXEC_JUMP
bool "kexec jump (EXPERIMENTAL)"
depends on EXPERIMENTAL
- depends on KEXEC && HIBERNATION && X86_32
+ depends on KEXEC && HIBERNATION
---help---
Jump between original kernel and kexeced kernel and invoke
code in physical address mode via KEXEC
# create a compressed vmlinux image from the original vmlinux
#
-targets := vmlinux vmlinux.bin vmlinux.bin.gz head_$(BITS).o misc.o piggy.o
+targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma head_$(BITS).o misc.o piggy.o
KBUILD_CFLAGS := -m$(BITS) -D__KERNEL__ $(LINUX_INCLUDE) -O2
KBUILD_CFLAGS += -fno-strict-aliasing -fPIC
ifdef CONFIG_RELOCATABLE
$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin.all FORCE
$(call if_changed,gzip)
+$(obj)/vmlinux.bin.bz2: $(obj)/vmlinux.bin.all FORCE
+ $(call if_changed,bzip2)
+$(obj)/vmlinux.bin.lzma: $(obj)/vmlinux.bin.all FORCE
+ $(call if_changed,lzma)
else
$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
$(call if_changed,gzip)
+$(obj)/vmlinux.bin.bz2: $(obj)/vmlinux.bin FORCE
+ $(call if_changed,bzip2)
+$(obj)/vmlinux.bin.lzma: $(obj)/vmlinux.bin FORCE
+ $(call if_changed,lzma)
endif
LDFLAGS_piggy.o := -r --format binary --oformat elf32-i386 -T
else
+
$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
$(call if_changed,gzip)
+$(obj)/vmlinux.bin.bz2: $(obj)/vmlinux.bin FORCE
+ $(call if_changed,bzip2)
+$(obj)/vmlinux.bin.lzma: $(obj)/vmlinux.bin FORCE
+ $(call if_changed,lzma)
LDFLAGS_piggy.o := -r --format binary --oformat elf64-x86-64 -T
endif
-$(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.gz FORCE
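+# The Kconfig choice guarantees that exactly one CONFIG_KERNEL_* compression
+# option is 'y', so only one of the assignments below lands on suffix_y; the
+# disabled options merely define an unused suffix_ variable: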
+suffix_$(CONFIG_KERNEL_GZIP) = gz
+suffix_$(CONFIG_KERNEL_BZIP2) = bz2
+suffix_$(CONFIG_KERNEL_LZMA) = lzma
+
+$(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.$(suffix_y) FORCE
$(call if_changed,ld)
/*
* gzip declarations
*/
-
-#define OF(args) args
#define STATIC static
#undef memset
#undef memcpy
#define memzero(s, n) memset((s), 0, (n))
-typedef unsigned char uch;
-typedef unsigned short ush;
-typedef unsigned long ulg;
-
-/*
- * Window size must be at least 32k, and a power of two.
- * We don't actually have a window just a huge output buffer,
- * so we report a 2G window size, as that should always be
- * larger than our output buffer:
- */
-#define WSIZE 0x80000000
-
-/* Input buffer: */
-static unsigned char *inbuf;
-
-/* Sliding window buffer (and final output buffer): */
-static unsigned char *window;
-
-/* Valid bytes in inbuf: */
-static unsigned insize;
-
-/* Index of next byte to be processed in inbuf: */
-static unsigned inptr;
-
-/* Bytes in output buffer: */
-static unsigned outcnt;
-
-/* gzip flag byte */
-#define ASCII_FLAG 0x01 /* bit 0 set: file probably ASCII text */
-#define CONTINUATION 0x02 /* bit 1 set: continuation of multi-part gz file */
-#define EXTRA_FIELD 0x04 /* bit 2 set: extra field present */
-#define ORIG_NAM 0x08 /* bit 3 set: original file name present */
-#define COMMENT 0x10 /* bit 4 set: file comment present */
-#define ENCRYPTED 0x20 /* bit 5 set: file is encrypted */
-#define RESERVED 0xC0 /* bit 6, 7: reserved */
-
-#define get_byte() (inptr < insize ? inbuf[inptr++] : fill_inbuf())
-
-/* Diagnostic functions */
-#ifdef DEBUG
-# define Assert(cond, msg) do { if (!(cond)) error(msg); } while (0)
-# define Trace(x) do { fprintf x; } while (0)
-# define Tracev(x) do { if (verbose) fprintf x ; } while (0)
-# define Tracevv(x) do { if (verbose > 1) fprintf x ; } while (0)
-# define Tracec(c, x) do { if (verbose && (c)) fprintf x ; } while (0)
-# define Tracecv(c, x) do { if (verbose > 1 && (c)) fprintf x ; } while (0)
-#else
-# define Assert(cond, msg)
-# define Trace(x)
-# define Tracev(x)
-# define Tracevv(x)
-# define Tracec(c, x)
-# define Tracecv(c, x)
-#endif
-static int fill_inbuf(void);
-static void flush_window(void);
static void error(char *m);
/*
static struct boot_params *real_mode; /* Pointer to real-mode data */
static int quiet;
-extern unsigned char input_data[];
-extern int input_len;
-
-static long bytes_out;
-
static void *memset(void *s, int c, unsigned n);
-static void *memcpy(void *dest, const void *src, unsigned n);
+void *memcpy(void *dest, const void *src, unsigned n);
static void __putstr(int, const char *);
#define putstr(__x) __putstr(0, __x)
static int vidport;
static int lines, cols;
-#include "../../../../lib/inflate.c"
+#ifdef CONFIG_KERNEL_GZIP
+#include "../../../../lib/decompress_inflate.c"
+#endif
+
+#ifdef CONFIG_KERNEL_BZIP2
+#include "../../../../lib/decompress_bunzip2.c"
+#endif
+
+#ifdef CONFIG_KERNEL_LZMA
+#include "../../../../lib/decompress_unlzma.c"
+#endif
static void scroll(void)
{
return s;
}
-static void *memcpy(void *dest, const void *src, unsigned n)
+void *memcpy(void *dest, const void *src, unsigned n)
{
int i;
const char *s = src;
return dest;
}
-/* ===========================================================================
- * Fill the input buffer. This is called only when the buffer is empty
- * and at least one byte is really needed.
- */
-static int fill_inbuf(void)
-{
- error("ran out of input data");
- return 0;
-}
-
-/* ===========================================================================
- * Write the output window window[0..outcnt-1] and update crc and bytes_out.
- * (Used for the decompressed data only.)
- */
-static void flush_window(void)
-{
- /* With my window equal to my output buffer
- * I only need to compute the crc here.
- */
- unsigned long c = crc; /* temporary variable */
- unsigned n;
- unsigned char *in, ch;
-
- in = window;
- for (n = 0; n < outcnt; n++) {
- ch = *in++;
- c = crc_32_tab[((int)c ^ ch) & 0xff] ^ (c >> 8);
- }
- crc = c;
- bytes_out += (unsigned long)outcnt;
- outcnt = 0;
-}
static void error(char *x)
{
lines = real_mode->screen_info.orig_video_lines;
cols = real_mode->screen_info.orig_video_cols;
- window = output; /* Output buffer (Normally at 1M) */
free_mem_ptr = heap; /* Heap */
free_mem_end_ptr = heap + BOOT_HEAP_SIZE;
- inbuf = input_data; /* Input buffer */
- insize = input_len;
- inptr = 0;
#ifdef CONFIG_X86_64
if ((unsigned long)output & (__KERNEL_ALIGN - 1))
#endif
#endif
- makecrc();
if (!quiet)
putstr("\nDecompressing Linux... ");
- gunzip();
+ decompress(input_data, input_len, NULL, NULL, output, NULL, error);
parse_elf(output);
if (!quiet)
putstr("done.\nBooting the kernel.\n");
#define setup_secondary_clock setup_secondary_APIC_clock
#endif
+#ifdef CONFIG_X86_VSMP
extern int is_vsmp_box(void);
+#else
+static inline int is_vsmp_box(void)
+{
+ return 0;
+}
+#endif
extern void xapic_wait_icr_idle(void);
extern u32 safe_xapic_wait_icr_idle(void);
extern void xapic_icr_write(u32, u32);
void (*send_IPI_self)(int vector);
/* wakeup_secondary_cpu */
- int (*wakeup_cpu)(int apicid, unsigned long start_eip);
+ int (*wakeup_secondary_cpu)(int apicid, unsigned long start_eip);
int trampoline_phys_low;
int trampoline_phys_high;
u32 (*safe_wait_icr_idle)(void);
};
+/*
+ * Pointer to the local APIC driver in use on this system (there's
+ * always just one such driver in use - the kernel decides via an
+ * early probing process which one it picks - and then sticks to it):
+ */
extern struct apic *apic;
+/*
+ * APIC functionality to boot other CPUs - only used on SMP:
+ */
+#ifdef CONFIG_SMP
+extern atomic_t init_deasserted;
+extern int wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip);
+#endif
+
static inline u32 apic_read(u32 reg)
{
return apic->read(reg);
static inline void ack_APIC_irq(void)
{
+#ifdef CONFIG_X86_LOCAL_APIC
/*
* ack_APIC_irq() actually gets compiled as a single instruction
* ... yummie.
/* Docs say use 0 for future compatibility */
apic_write(APIC_EOI, 0);
+#endif
}
static inline unsigned default_get_apic_id(unsigned long x)
#define DEFAULT_TRAMPOLINE_PHYS_LOW 0x467
#define DEFAULT_TRAMPOLINE_PHYS_HIGH 0x469
-#ifdef CONFIG_X86_32
-extern void es7000_update_apic_to_cluster(void);
-#else
+#ifdef CONFIG_X86_64
extern struct apic apic_flat;
extern struct apic apic_physflat;
extern struct apic apic_x2apic_cluster;
#define EXTENDED_VGA 0xfffe /* 80x50 mode */
#define ASK_VGA 0xfffd /* ask for it at bootup */
+#ifdef __KERNEL__
+
/* Physical address where kernel should be loaded. */
#define LOAD_PHYSICAL_ADDR ((CONFIG_PHYSICAL_START \
+ (CONFIG_PHYSICAL_ALIGN - 1)) \
& ~(CONFIG_PHYSICAL_ALIGN - 1))
+#ifdef CONFIG_KERNEL_BZIP2
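+/* bzip2's decompressor needs several megabytes of work area,
+ * hence the much larger heap: */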
+#define BOOT_HEAP_SIZE 0x400000
+#else /* !CONFIG_KERNEL_BZIP2 */
+
#ifdef CONFIG_X86_64
#define BOOT_HEAP_SIZE 0x7000
-#define BOOT_STACK_SIZE 0x4000
#else
#define BOOT_HEAP_SIZE 0x4000
+#endif
+
+#endif /* !CONFIG_KERNEL_BZIP2 */
+
+#ifdef CONFIG_X86_64
+#define BOOT_STACK_SIZE 0x4000
+#else
#define BOOT_STACK_SIZE 0x1000
#endif
+#endif /* __KERNEL__ */
+
#endif /* _ASM_X86_BOOT_H */
#include <linux/mm.h>
/* Caches aren't brain-dead on the intel. */
-#define flush_cache_all() do { } while (0)
-#define flush_cache_mm(mm) do { } while (0)
-#define flush_cache_dup_mm(mm) do { } while (0)
-#define flush_cache_range(vma, start, end) do { } while (0)
-#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
-#define flush_dcache_page(page) do { } while (0)
-#define flush_dcache_mmap_lock(mapping) do { } while (0)
-#define flush_dcache_mmap_unlock(mapping) do { } while (0)
-#define flush_icache_range(start, end) do { } while (0)
-#define flush_icache_page(vma, pg) do { } while (0)
-#define flush_icache_user_range(vma, pg, adr, len) do { } while (0)
-#define flush_cache_vmap(start, end) do { } while (0)
-#define flush_cache_vunmap(start, end) do { } while (0)
+static inline void flush_cache_all(void) { }
+static inline void flush_cache_mm(struct mm_struct *mm) { }
+static inline void flush_cache_dup_mm(struct mm_struct *mm) { }
+static inline void flush_cache_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end) { }
+static inline void flush_cache_page(struct vm_area_struct *vma,
+ unsigned long vmaddr, unsigned long pfn) { }
+static inline void flush_dcache_page(struct page *page) { }
+static inline void flush_dcache_mmap_lock(struct address_space *mapping) { }
+static inline void flush_dcache_mmap_unlock(struct address_space *mapping) { }
+static inline void flush_icache_range(unsigned long start,
+ unsigned long end) { }
+static inline void flush_icache_page(struct vm_area_struct *vma,
+ struct page *page) { }
+static inline void flush_icache_user_range(struct vm_area_struct *vma,
+ struct page *page,
+ unsigned long addr,
+ unsigned long len) { }
+static inline void flush_cache_vmap(unsigned long start, unsigned long end) { }
+static inline void flush_cache_vunmap(unsigned long start,
+ unsigned long end) { }
-#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
- memcpy((dst), (src), (len))
-#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
- memcpy((dst), (src), (len))
+static inline void copy_to_user_page(struct vm_area_struct *vma,
+ struct page *page, unsigned long vaddr,
+ void *dst, const void *src,
+ unsigned long len)
+{
+ memcpy(dst, src, len);
+}
+
+static inline void copy_from_user_page(struct vm_area_struct *vma,
+ struct page *page, unsigned long vaddr,
+ void *dst, const void *src,
+ unsigned long len)
+{
+ memcpy(dst, src, len);
+}
#define PG_non_WB PG_arch_1
PAGEFLAG(NonWB, non_WB)
#else /* !CONFIG_X86_32 */
-#define MAX_EFI_IO_PAGES 100
-
extern u64 efi_call0(void *fp);
extern u64 efi_call1(void *fp, u64 arg1);
extern u64 efi_call2(void *fp, u64 arg1, u64 arg2);
smp_invalidate_interrupt)
#endif
+BUILD_INTERRUPT(generic_interrupt, GENERIC_INTERRUPT_VECTOR)
+
/*
* every pentium local APIC has two 'local interrupts', with a
* soft-definable vector attached to both interrupts, one of
+/*
+ * fixmap.h: compile-time virtual memory allocation
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1998 Ingo Molnar
+ *
+ * Support of BIGMEM added by Gerhard Wichert, Siemens AG, July 1999
+ * x86_32 and x86_64 integration by Gustavo F. Padovan, February 2009
+ */
+
#ifndef _ASM_X86_FIXMAP_H
#define _ASM_X86_FIXMAP_H
+#ifndef __ASSEMBLY__
+#include <linux/kernel.h>
+#include <asm/acpi.h>
+#include <asm/apicdef.h>
+#include <asm/page.h>
+#ifdef CONFIG_X86_32
+#include <linux/threads.h>
+#include <asm/kmap_types.h>
+#else
+#include <asm/vsyscall.h>
+#endif
+
+/*
+ * We can't declare FIXADDR_TOP as a variable for x86_64 because vsyscall
+ * uses fixmaps that rely on FIXADDR_TOP for proper address calculation.
+ * Because of this, FIXADDR_TOP x86 integration was left for later work.
+ */
+#ifdef CONFIG_X86_32
+/* used by vmalloc.c, vsyscall.lds.S.
+ *
+ * Leave one empty page between vmalloc'ed areas and
+ * the start of the fixmap.
+ */
+extern unsigned long __FIXADDR_TOP;
+#define FIXADDR_TOP ((unsigned long)__FIXADDR_TOP)
+
+#define FIXADDR_USER_START __fix_to_virt(FIX_VDSO)
+#define FIXADDR_USER_END __fix_to_virt(FIX_VDSO - 1)
+#else
+#define FIXADDR_TOP (VSYSCALL_END-PAGE_SIZE)
+
+/* Only covers 32bit vsyscalls currently. Need another set for 64bit. */
+#define FIXADDR_USER_START ((unsigned long)VSYSCALL32_VSYSCALL)
+#define FIXADDR_USER_END (FIXADDR_USER_START + PAGE_SIZE)
+#endif
+
+
+/*
+ * Here we define all the compile-time 'special' virtual
+ * addresses. The point is to have a constant address at
+ * compile time, but to set the physical address only
+ * in the boot process.
+ * for x86_32: We allocate these special addresses
+ * from the end of virtual memory (0xfffff000) backwards.
+ * This also lets us do fail-safe vmalloc(): we
+ * can guarantee that these special addresses and
+ * vmalloc()-ed addresses never overlap.
+ *
+ * These 'compile-time allocated' memory buffers are
+ * fixed-size 4k pages (or larger if used with an increment
+ * higher than 1). Use set_fixmap(idx,phys) to associate
+ * physical memory with fixmap indices.
+ *
+ * TLB entries of such buffers will not be flushed across
+ * task switches.
+ */
+enum fixed_addresses {
#ifdef CONFIG_X86_32
-# include "fixmap_32.h"
+ FIX_HOLE,
+ FIX_VDSO,
#else
-# include "fixmap_64.h"
+ VSYSCALL_LAST_PAGE,
+ VSYSCALL_FIRST_PAGE = VSYSCALL_LAST_PAGE
+ + ((VSYSCALL_END-VSYSCALL_START) >> PAGE_SHIFT) - 1,
+ VSYSCALL_HPET,
#endif
+ FIX_DBGP_BASE,
+ FIX_EARLYCON_MEM_BASE,
+#ifdef CONFIG_X86_LOCAL_APIC
+	FIX_APIC_BASE,	/* local (CPU) APIC -- required for SMP or not */
+#endif
+#ifdef CONFIG_X86_IO_APIC
+ FIX_IO_APIC_BASE_0,
+ FIX_IO_APIC_BASE_END = FIX_IO_APIC_BASE_0 + MAX_IO_APICS - 1,
+#endif
+#ifdef CONFIG_X86_VISWS_APIC
+ FIX_CO_CPU, /* Cobalt timer */
+ FIX_CO_APIC, /* Cobalt APIC Redirection Table */
+ FIX_LI_PCIA, /* Lithium PCI Bridge A */
+ FIX_LI_PCIB, /* Lithium PCI Bridge B */
+#endif
+#ifdef CONFIG_X86_F00F_BUG
+ FIX_F00F_IDT, /* Virtual mapping for IDT */
+#endif
+#ifdef CONFIG_X86_CYCLONE_TIMER
+	FIX_CYCLONE_TIMER,	/* Cyclone timer register */
+#endif
+#ifdef CONFIG_X86_32
+ FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */
+ FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
+#ifdef CONFIG_PCI_MMCONFIG
+ FIX_PCIE_MCFG,
+#endif
+#endif
+#ifdef CONFIG_PARAVIRT
+ FIX_PARAVIRT_BOOTMAP,
+#endif
+ __end_of_permanent_fixed_addresses,
+#ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT
+ FIX_OHCI1394_BASE,
+#endif
+ /*
+ * 256 temporary boot-time mappings, used by early_ioremap(),
+ * before ioremap() is functional.
+ *
+	 * We round it up to the next 256-page boundary so that we
+ * can have a single pgd entry and a single pte table:
+ */
+#define NR_FIX_BTMAPS 64
+#define FIX_BTMAPS_SLOTS 4
+ FIX_BTMAP_END = __end_of_permanent_fixed_addresses + 256 -
+ (__end_of_permanent_fixed_addresses & 255),
+ FIX_BTMAP_BEGIN = FIX_BTMAP_END + NR_FIX_BTMAPS*FIX_BTMAPS_SLOTS - 1,
+#ifdef CONFIG_X86_32
+ FIX_WP_TEST,
+#endif
+ __end_of_fixed_addresses
+};
+
+
+extern void reserve_top_address(unsigned long reserve);
+
+#define FIXADDR_SIZE (__end_of_permanent_fixed_addresses << PAGE_SHIFT)
+#define FIXADDR_BOOT_SIZE (__end_of_fixed_addresses << PAGE_SHIFT)
+#define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE)
+#define FIXADDR_BOOT_START (FIXADDR_TOP - FIXADDR_BOOT_SIZE)
extern int fixmaps_set;
BUG_ON(vaddr >= FIXADDR_TOP || vaddr < FIXADDR_START);
return __virt_to_fix(vaddr);
}
+#endif /* !__ASSEMBLY__ */
#endif /* _ASM_X86_FIXMAP_H */
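
As a worked example of the fixmap arithmetic: index 0 maps the page just below
FIXADDR_TOP, and each further index steps one page downwards. A small sketch
assuming 4 KiB pages, the common 32-bit default FIXADDR_TOP of 0xfffff000, and
the usual __fix_to_virt() definition:

#include <stdio.h>

#define PAGE_SHIFT	12		/* assumed: 4 KiB pages */
#define FIXADDR_TOP	0xfffff000UL	/* assumed: 32-bit default */

/* The usual kernel definition: fixed addresses grow downwards. */
#define __fix_to_virt(x)	(FIXADDR_TOP - ((x) << PAGE_SHIFT))

int main(void)
{
	unsigned long idx;

	for (idx = 0; idx < 4; idx++)
		printf("fixmap index %lu -> %#lx\n", idx, __fix_to_virt(idx));
	return 0;
}

Under these assumptions, indices 0..3 print 0xfffff000, 0xffffe000,
0xffffd000 and 0xffffc000.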
+++ /dev/null
-/*
- * fixmap.h: compile-time virtual memory allocation
- *
- * This file is subject to the terms and conditions of the GNU General Public
- * License. See the file "COPYING" in the main directory of this archive
- * for more details.
- *
- * Copyright (C) 1998 Ingo Molnar
- *
- * Support of BIGMEM added by Gerhard Wichert, Siemens AG, July 1999
- */
-
-#ifndef _ASM_X86_FIXMAP_32_H
-#define _ASM_X86_FIXMAP_32_H
-
-
-/* used by vmalloc.c, vsyscall.lds.S.
- *
- * Leave one empty page between vmalloc'ed areas and
- * the start of the fixmap.
- */
-extern unsigned long __FIXADDR_TOP;
-#define FIXADDR_USER_START __fix_to_virt(FIX_VDSO)
-#define FIXADDR_USER_END __fix_to_virt(FIX_VDSO - 1)
-
-#ifndef __ASSEMBLY__
-#include <linux/kernel.h>
-#include <asm/acpi.h>
-#include <asm/apicdef.h>
-#include <asm/page.h>
-#include <linux/threads.h>
-#include <asm/kmap_types.h>
-
-/*
- * Here we define all the compile-time 'special' virtual
- * addresses. The point is to have a constant address at
- * compile time, but to set the physical address only
- * in the boot process. We allocate these special addresses
- * from the end of virtual memory (0xfffff000) backwards.
- * Also this lets us do fail-safe vmalloc(), we
- * can guarantee that these special addresses and
- * vmalloc()-ed addresses never overlap.
- *
- * these 'compile-time allocated' memory buffers are
- * fixed-size 4k pages. (or larger if used with an increment
- * highger than 1) use fixmap_set(idx,phys) to associate
- * physical memory with fixmap indices.
- *
- * TLB entries of such buffers will not be flushed across
- * task switches.
- */
-enum fixed_addresses {
- FIX_HOLE,
- FIX_VDSO,
- FIX_DBGP_BASE,
- FIX_EARLYCON_MEM_BASE,
-#ifdef CONFIG_X86_LOCAL_APIC
- FIX_APIC_BASE, /* local (CPU) APIC) -- required for SMP or not */
-#endif
-#ifdef CONFIG_X86_IO_APIC
- FIX_IO_APIC_BASE_0,
- FIX_IO_APIC_BASE_END = FIX_IO_APIC_BASE_0 + MAX_IO_APICS-1,
-#endif
-#ifdef CONFIG_X86_VISWS_APIC
- FIX_CO_CPU, /* Cobalt timer */
- FIX_CO_APIC, /* Cobalt APIC Redirection Table */
- FIX_LI_PCIA, /* Lithium PCI Bridge A */
- FIX_LI_PCIB, /* Lithium PCI Bridge B */
-#endif
-#ifdef CONFIG_X86_F00F_BUG
- FIX_F00F_IDT, /* Virtual mapping for IDT */
-#endif
-#ifdef CONFIG_X86_CYCLONE_TIMER
- FIX_CYCLONE_TIMER, /*cyclone timer register*/
-#endif
- FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */
- FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
-#ifdef CONFIG_PCI_MMCONFIG
- FIX_PCIE_MCFG,
-#endif
-#ifdef CONFIG_PARAVIRT
- FIX_PARAVIRT_BOOTMAP,
-#endif
- __end_of_permanent_fixed_addresses,
- /*
- * 256 temporary boot-time mappings, used by early_ioremap(),
- * before ioremap() is functional.
- *
- * We round it up to the next 256 pages boundary so that we
- * can have a single pgd entry and a single pte table:
- */
-#define NR_FIX_BTMAPS 64
-#define FIX_BTMAPS_SLOTS 4
- FIX_BTMAP_END = __end_of_permanent_fixed_addresses + 256 -
- (__end_of_permanent_fixed_addresses & 255),
- FIX_BTMAP_BEGIN = FIX_BTMAP_END + NR_FIX_BTMAPS*FIX_BTMAPS_SLOTS - 1,
- FIX_WP_TEST,
-#ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT
- FIX_OHCI1394_BASE,
-#endif
- __end_of_fixed_addresses
-};
-
-extern void reserve_top_address(unsigned long reserve);
-
-
-#define FIXADDR_TOP ((unsigned long)__FIXADDR_TOP)
-
-#define __FIXADDR_SIZE (__end_of_permanent_fixed_addresses << PAGE_SHIFT)
-#define __FIXADDR_BOOT_SIZE (__end_of_fixed_addresses << PAGE_SHIFT)
-#define FIXADDR_START (FIXADDR_TOP - __FIXADDR_SIZE)
-#define FIXADDR_BOOT_START (FIXADDR_TOP - __FIXADDR_BOOT_SIZE)
-
-#endif /* !__ASSEMBLY__ */
-#endif /* _ASM_X86_FIXMAP_32_H */
+++ /dev/null
-/*
- * fixmap.h: compile-time virtual memory allocation
- *
- * This file is subject to the terms and conditions of the GNU General Public
- * License. See the file "COPYING" in the main directory of this archive
- * for more details.
- *
- * Copyright (C) 1998 Ingo Molnar
- */
-
-#ifndef _ASM_X86_FIXMAP_64_H
-#define _ASM_X86_FIXMAP_64_H
-
-#include <linux/kernel.h>
-#include <asm/acpi.h>
-#include <asm/apicdef.h>
-#include <asm/page.h>
-#include <asm/vsyscall.h>
-#include <asm/efi.h>
-
-/*
- * Here we define all the compile-time 'special' virtual
- * addresses. The point is to have a constant address at
- * compile time, but to set the physical address only
- * in the boot process.
- *
- * These 'compile-time allocated' memory buffers are
- * fixed-size 4k pages (or larger if used with an increment
- * higher than 1). Use set_fixmap(idx,phys) to associate
- * physical memory with fixmap indices.
- *
- * TLB entries of such buffers will not be flushed across
- * task switches.
- */
-
-enum fixed_addresses {
- VSYSCALL_LAST_PAGE,
- VSYSCALL_FIRST_PAGE = VSYSCALL_LAST_PAGE
- + ((VSYSCALL_END-VSYSCALL_START) >> PAGE_SHIFT) - 1,
- VSYSCALL_HPET,
- FIX_DBGP_BASE,
- FIX_EARLYCON_MEM_BASE,
- FIX_APIC_BASE, /* local (CPU) APIC) -- required for SMP or not */
- FIX_IO_APIC_BASE_0,
- FIX_IO_APIC_BASE_END = FIX_IO_APIC_BASE_0 + MAX_IO_APICS - 1,
- FIX_EFI_IO_MAP_LAST_PAGE,
- FIX_EFI_IO_MAP_FIRST_PAGE = FIX_EFI_IO_MAP_LAST_PAGE
- + MAX_EFI_IO_PAGES - 1,
-#ifdef CONFIG_PARAVIRT
- FIX_PARAVIRT_BOOTMAP,
-#endif
- __end_of_permanent_fixed_addresses,
-#ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT
- FIX_OHCI1394_BASE,
-#endif
- /*
- * 256 temporary boot-time mappings, used by early_ioremap(),
- * before ioremap() is functional.
- *
- * We round it up to the next 256 pages boundary so that we
- * can have a single pgd entry and a single pte table:
- */
-#define NR_FIX_BTMAPS 64
-#define FIX_BTMAPS_SLOTS 4
- FIX_BTMAP_END = __end_of_permanent_fixed_addresses + 256 -
- (__end_of_permanent_fixed_addresses & 255),
- FIX_BTMAP_BEGIN = FIX_BTMAP_END + NR_FIX_BTMAPS*FIX_BTMAPS_SLOTS - 1,
- __end_of_fixed_addresses
-};
-
-#define FIXADDR_TOP (VSYSCALL_END-PAGE_SIZE)
-#define FIXADDR_SIZE (__end_of_fixed_addresses << PAGE_SHIFT)
-#define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE)
-
-/* Only covers 32bit vsyscalls currently. Need another set for 64bit. */
-#define FIXADDR_USER_START ((unsigned long)VSYSCALL32_VSYSCALL)
-#define FIXADDR_USER_END (FIXADDR_USER_START + PAGE_SIZE)
-
-#endif /* _ASM_X86_FIXMAP_64_H */
unsigned int apic_timer_irqs; /* arch dependent */
unsigned int irq_spurious_count;
#endif
+ unsigned int generic_irqs; /* arch dependent */
#ifdef CONFIG_SMP
unsigned int irq_resched_count;
unsigned int irq_call_count;
/* Interrupt handlers registered during init_IRQ */
extern void apic_timer_interrupt(void);
+extern void generic_interrupt(void);
extern void error_interrupt(void);
extern void spurious_interrupt(void);
extern void thermal_interrupt(void);
#else /* CONFIG_X86_32 */
-extern void finit(void);
+#ifdef CONFIG_MATH_EMULATION
+extern void finit_task(struct task_struct *tsk);
+#else
+static inline void finit_task(struct task_struct *tsk)
+{
+}
+#endif
static inline void tolerant_fwait(void)
{
--- /dev/null
+#ifndef _ASM_X86_INIT_32_H
+#define _ASM_X86_INIT_32_H
+
+#ifdef CONFIG_X86_32
+extern void __init early_ioremap_page_table_range_init(void);
+#endif
+
+extern unsigned long __init
+kernel_physical_mapping_init(unsigned long start,
+ unsigned long end,
+ unsigned long page_size_mask);
+
+
+extern unsigned long __initdata e820_table_start;
+extern unsigned long __meminitdata e820_table_end;
+extern unsigned long __meminitdata e820_table_top;
+
+#endif /* _ASM_X86_INIT_32_H */
extern void iounmap(volatile void __iomem *addr);
-extern void __iomem *fix_ioremap(unsigned idx, unsigned long phys);
-
#ifdef CONFIG_X86_32
# include "io_32.h"
extern void __iomem *early_ioremap(unsigned long offset, unsigned long size);
extern void __iomem *early_memremap(unsigned long offset, unsigned long size);
extern void early_iounmap(void __iomem *addr, unsigned long size);
-extern void __iomem *fix_ioremap(unsigned idx, unsigned long phys);
#define IO_SPACE_LIMIT 0xffff
extern void fixup_irqs(void);
#endif
+extern void (*generic_interrupt_extension)(void);
extern void init_IRQ(void);
extern void native_init_IRQ(void);
extern bool handle_irq(unsigned irq, struct pt_regs *regs);
*/
#define LOCAL_PERF_VECTOR 0xee
+/*
+ * Generic system vector for platform specific use
+ */
+#define GENERIC_INTERRUPT_VECTOR 0xed
+
/*
* First APIC vector available to drivers: (vectors 0x30-0xee) we
* start at 0x31(0x41) to spread out vectors evenly between priority
# define PAGES_NR 4
#else
# define PA_CONTROL_PAGE 0
-# define PA_TABLE_PAGE 1
-# define PAGES_NR 2
+# define VA_CONTROL_PAGE 1
+# define PA_TABLE_PAGE 2
+# define PA_SWAP_PAGE 3
+# define PAGES_NR 4
#endif
-#ifdef CONFIG_X86_32
# define KEXEC_CONTROL_CODE_MAX_SIZE 2048
-#endif
#ifndef __ASSEMBLY__
unsigned int has_pae,
unsigned int preserve_context);
#else
-NORET_TYPE void
+unsigned long
relocate_kernel(unsigned long indirection_page,
unsigned long page_list,
- unsigned long start_address) ATTRIB_NORET;
+ unsigned long start_address,
+ unsigned int preserve_context);
#endif
#define ARCH_HAS_KIMAGE_ARCH
#undef notrace
#define notrace __attribute__((no_instrument_function))
-#ifdef CONFIG_X86_64
-#define __ALIGN .p2align 4,,15
-#define __ALIGN_STR ".p2align 4,,15"
-#endif
-
#ifdef CONFIG_X86_32
#define asmlinkage CPP_ASMLINKAGE __attribute__((regparm(0)))
/*
__asmlinkage_protect_n(ret, "g" (arg1), "g" (arg2), "g" (arg3), \
"g" (arg4), "g" (arg5), "g" (arg6))
-#endif
+#endif /* CONFIG_X86_32 */
+
+#ifdef __ASSEMBLY__
#define GLOBAL(name) \
.globl name; \
name:
+#ifdef CONFIG_X86_64
+#define __ALIGN .p2align 4,,15
+#define __ALIGN_STR ".p2align 4,,15"
+#endif
+
#ifdef CONFIG_X86_ALIGNMENT_16
#define __ALIGN .align 16,0x90
#define __ALIGN_STR ".align 16,0x90"
#endif
+#endif /* __ASSEMBLY__ */
+
#endif /* _ASM_X86_LINKAGE_H */
#endif /* CONFIG_DISCONTIGMEM */
#ifdef CONFIG_NEED_MULTIPLE_NODES
-
-/*
- * Following are macros that are specific to this numa platform.
- */
-#define reserve_bootmem(addr, size, flags) \
- reserve_bootmem_node(NODE_DATA(0), (addr), (size), (flags))
-#define alloc_bootmem(x) \
- __alloc_bootmem_node(NODE_DATA(0), (x), SMP_CACHE_BYTES, __pa(MAX_DMA_ADDRESS))
-#define alloc_bootmem_nopanic(x) \
- __alloc_bootmem_node_nopanic(NODE_DATA(0), (x), SMP_CACHE_BYTES, \
- __pa(MAX_DMA_ADDRESS))
-#define alloc_bootmem_low(x) \
- __alloc_bootmem_node(NODE_DATA(0), (x), SMP_CACHE_BYTES, 0)
-#define alloc_bootmem_pages(x) \
- __alloc_bootmem_node(NODE_DATA(0), (x), PAGE_SIZE, __pa(MAX_DMA_ADDRESS))
-#define alloc_bootmem_pages_nopanic(x) \
- __alloc_bootmem_node_nopanic(NODE_DATA(0), (x), PAGE_SIZE, \
- __pa(MAX_DMA_ADDRESS))
-#define alloc_bootmem_low_pages(x) \
- __alloc_bootmem_node(NODE_DATA(0), (x), PAGE_SIZE, 0)
-#define alloc_bootmem_node(pgdat, x) \
-({ \
- struct pglist_data __maybe_unused \
- *__alloc_bootmem_node__pgdat = (pgdat); \
- __alloc_bootmem_node(NODE_DATA(0), (x), SMP_CACHE_BYTES, \
- __pa(MAX_DMA_ADDRESS)); \
-})
-#define alloc_bootmem_pages_node(pgdat, x) \
-({ \
- struct pglist_data __maybe_unused \
- *__alloc_bootmem_node__pgdat = (pgdat); \
- __alloc_bootmem_node(NODE_DATA(0), (x), PAGE_SIZE, \
- __pa(MAX_DMA_ADDRESS)); \
-})
-#define alloc_bootmem_low_pages_node(pgdat, x) \
-({ \
- struct pglist_data __maybe_unused \
- *__alloc_bootmem_node__pgdat = (pgdat); \
- __alloc_bootmem_node(NODE_DATA(0), (x), PAGE_SIZE, 0); \
-})
+/* always use node 0 for bootmem on this numa platform */
+#define bootmem_arch_preferred_node(__bdata, size, align, goal, limit) \
+ (NODE_DATA(0)->bdata)
#endif /* CONFIG_NEED_MULTIPLE_NODES */
#endif /* _ASM_X86_MMZONE_32_H */
extern int pxm_to_nid(int pxm);
extern void numa_remove_cpu(int cpu);
-#ifdef CONFIG_NUMA
+#ifdef CONFIG_HIGHMEM
extern void set_highmem_pages_init(void);
+#else
+static inline void set_highmem_pages_init(void)
+{
+}
#endif
#endif /* _ASM_X86_NUMA_32_H */
#ifndef __ASSEMBLY__
-struct pgprot;
-
extern int page_is_ram(unsigned long pagenr);
extern int devmem_is_allowed(unsigned long pagenr);
-extern void map_devmem(unsigned long pfn, unsigned long size,
- struct pgprot vma_prot);
-extern void unmap_devmem(unsigned long pfn, unsigned long size,
- struct pgprot vma_prot);
extern unsigned long max_low_pfn_mapped;
extern unsigned long max_pfn_mapped;
#define _ASM_X86_PAT_H
#include <linux/types.h>
+#include <asm/pgtable_types.h>
#ifdef CONFIG_X86_PAT
extern int pat_enabled;
extern int kernel_map_sync_memtype(u64 base, unsigned long size,
unsigned long flag);
+extern void map_devmem(unsigned long pfn, unsigned long size,
+ struct pgprot vma_prot);
+extern void unmap_devmem(unsigned long pfn, unsigned long size,
+ struct pgprot vma_prot);
#endif /* _ASM_X86_PAT_H */
#else /* ...!ASSEMBLY */
#include <linux/stringify.h>
+#include <asm/sections.h>
+
+#define __addr_to_pcpu_ptr(addr) \
+ (void *)((unsigned long)(addr) - (unsigned long)pcpu_base_addr \
+ + (unsigned long)__per_cpu_start)
+#define __pcpu_ptr_to_addr(ptr) \
+ (void *)((unsigned long)(ptr) + (unsigned long)pcpu_base_addr \
+ - (unsigned long)__per_cpu_start)
#ifdef CONFIG_SMP
#define __percpu_arg(x) "%%"__stringify(__percpu_seg)":%P" #x
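
The __addr_to_pcpu_ptr()/__pcpu_ptr_to_addr() pair above is plain linear
translation between a dynamically allocated per-cpu chunk and the static
per-cpu area's coordinate space. A userspace sketch of the same arithmetic,
where chunk_base and static_base stand in for pcpu_base_addr and
__per_cpu_start:

#include <assert.h>
#include <stdio.h>

/* Stand-ins for pcpu_base_addr and __per_cpu_start. */
static char chunk_base[4096];	/* dynamically allocated percpu chunk */
static char static_base[4096];	/* linker-provided static percpu area */

/* Mirror of __addr_to_pcpu_ptr(): shift a chunk address into the
 * static area's coordinate space, preserving the offset. */
static void *addr_to_pcpu_ptr(void *addr)
{
	return (void *)((char *)addr - chunk_base + static_base);
}

/* Mirror of __pcpu_ptr_to_addr(): the exact inverse translation. */
static void *pcpu_ptr_to_addr(void *ptr)
{
	return (void *)((char *)ptr - static_base + chunk_base);
}

int main(void)
{
	void *addr = chunk_base + 128;
	void *ptr = addr_to_pcpu_ptr(addr);

	assert(pcpu_ptr_to_addr(ptr) == addr);	/* round-trips exactly */
	printf("offset preserved: %td\n", (char *)ptr - static_base);
	return 0;
}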
return 1;
}
+pmd_t *populate_extra_pmd(unsigned long vaddr);
+pte_t *populate_extra_pte(unsigned long vaddr);
#endif /* __ASSEMBLY__ */
#ifdef CONFIG_X86_32
* area for the same reason. ;)
*/
#define VMALLOC_OFFSET (8 * 1024 * 1024)
+
+#ifndef __ASSEMBLER__
+extern bool __vmalloc_start_set; /* set once high_memory is set */
+#endif
+
#define VMALLOC_START ((unsigned long)high_memory + VMALLOC_OFFSET)
#ifdef CONFIG_X86_PAE
#define LAST_PKMAP 512
extern pteval_t __supported_pte_mask;
extern int nx_enabled;
+extern void set_nx(void);
#define pgprot_writecombine pgprot_writecombine
extern pgprot_t pgprot_writecombine(pgprot_t prot);
#define IO_BITMAP_LONGS (IO_BITMAP_BYTES/sizeof(long))
#define IO_BITMAP_OFFSET offsetof(struct tss_struct, io_bitmap)
#define INVALID_IO_BITMAP_OFFSET 0x8000
-#define INVALID_IO_BITMAP_OFFSET_LAZY 0x9000
struct tss_struct {
/*
* be within the limit.
*/
unsigned long io_bitmap[IO_BITMAP_LONGS + 1];
- /*
- * Cache the current maximum and the last task that used the bitmap:
- */
- unsigned long io_bitmap_max;
- struct thread_struct *io_bitmap_owner;
/*
* .. and then another 0x100 bytes for the emergency kernel stack:
#ifndef _ASM_X86_SECCOMP_32_H
#define _ASM_X86_SECCOMP_32_H
-#include <linux/thread_info.h>
-
-#ifdef TIF_32BIT
-#error "unexpected TIF_32BIT on i386"
-#endif
-
#include <linux/unistd.h>
#define __NR_seccomp_read __NR_read
#ifndef _ASM_X86_SECCOMP_64_H
#define _ASM_X86_SECCOMP_64_H
-#include <linux/thread_info.h>
-
-#ifdef TIF_32BIT
-#error "unexpected TIF_32BIT on x86_64"
-#else
-#define TIF_32BIT TIF_IA32
-#endif
-
#include <linux/unistd.h>
#include <asm/ia32_unistd.h>
void (*smp_read_mpc_oem)(struct mpc_oemtable *oemtable,
unsigned short oemsize);
int (*setup_ioapic_ids)(void);
- int (*update_apic)(void);
};
extern void x86_quirk_pre_intr_init(void);
#include <asm/bootparam.h>
/* Interrupt control for vSMPowered x86_64 systems */
+#ifdef CONFIG_X86_VSMP
void vsmp_init(void);
+#else
+static inline void vsmp_init(void) { }
+#endif
void setup_bios_corruption_check(void);
static inline int is_visws_box(void) { return 0; }
#endif
-extern int wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip);
-extern int wakeup_secondary_cpu_via_init(int apicid, unsigned long start_eip);
extern struct x86_quirks *x86_quirks;
extern unsigned long saved_video_mode;
struct task_struct; /* one of the stranger aspects of C forward declarations */
struct task_struct *__switch_to(struct task_struct *prev,
struct task_struct *next);
+struct tss_struct;
+void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
+ struct tss_struct *tss);
#ifdef CONFIG_X86_32
}
static __always_inline unsigned long __copy_from_user_nocache(void *to,
- const void __user *from, unsigned long n, unsigned long total)
+ const void __user *from, unsigned long n)
{
might_fault();
if (__builtin_constant_p(n)) {
static __always_inline unsigned long
__copy_from_user_inatomic_nocache(void *to, const void __user *from,
- unsigned long n, unsigned long total)
+ unsigned long n)
{
return __copy_from_user_ll_nocache_nozero(to, from, n);
}
extern long __copy_user_nocache(void *dst, const void __user *src,
unsigned size, int zerorest);
-static inline int __copy_from_user_nocache(void *dst, const void __user *src,
- unsigned size, unsigned long total)
+static inline int
+__copy_from_user_nocache(void *dst, const void __user *src, unsigned size)
{
might_sleep();
- /*
- * In practice this limit means that large file write()s
- * which get chunked to 4K copies get handled via
- * non-temporal stores here. Smaller writes get handled
- * via regular __copy_from_user():
- */
- if (likely(total >= PAGE_SIZE))
- return __copy_user_nocache(dst, src, size, 1);
- else
- return __copy_from_user(dst, src, size);
+ return __copy_user_nocache(dst, src, size, 1);
}
-static inline int __copy_from_user_inatomic_nocache(void *dst,
- const void __user *src, unsigned size, unsigned total)
+static inline int
+__copy_from_user_inatomic_nocache(void *dst, const void __user *src,
+ unsigned size)
{
- if (likely(total >= PAGE_SIZE))
- return __copy_user_nocache(dst, src, size, 0);
- else
- return __copy_from_user_inatomic(dst, src, size);
+ return __copy_user_nocache(dst, src, size, 0);
}
unsigned long
extern int is_uv_system(void);
extern void uv_cpu_init(void);
extern void uv_system_init(void);
-extern int uv_wakeup_secondary(int phys_apicid, unsigned int start_rip);
extern const struct cpumask *uv_flush_tlb_others(const struct cpumask *cpumask,
struct mm_struct *mm,
unsigned long va,
static inline int is_uv_system(void) { return 0; }
static inline void uv_cpu_init(void) { }
static inline void uv_system_init(void) { }
-static inline int uv_wakeup_secondary(int phys_apicid, unsigned int start_rip)
-{ return 1; }
static inline const struct cpumask *
uv_flush_tlb_others(const struct cpumask *cpumask, struct mm_struct *mm,
unsigned long va, unsigned int cpu)
#define SCIR_CPU_ACTIVITY 0x02 /* not idle */
#define SCIR_CPU_HB_INTERVAL (HZ) /* once per second */
+/* Loop through all installed blades */
+#define for_each_possible_blade(bid) \
+ for ((bid) = 0; (bid) < uv_num_possible_blades(); (bid)++)
+
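
A self-contained usage sketch of the new iterator; the macro body is copied
from the hunk above, while the blade-count stub and loop body are hypothetical
stand-ins so the sketch runs in userspace:

#include <stdio.h>

/* Stand-in for the real UV topology query. */
static int uv_num_possible_blades(void) { return 2; }

#define for_each_possible_blade(bid)				\
	for ((bid) = 0; (bid) < uv_num_possible_blades(); (bid)++)

int main(void)
{
	int bid;

	for_each_possible_blade(bid)
		printf("UV: blade %d possible\n", bid);
	return 0;
}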
/*
* Macros for converting between kernel virtual addresses, socket local physical
* addresses, and UV global physical addresses.
xmaddr_t arbitrary_virt_to_machine(void *address);
+unsigned long arbitrary_virt_to_mfn(void *vaddr);
void make_lowmem_page_readonly(void *vaddr);
void make_lowmem_page_readwrite(void *vaddr);
obj-$(CONFIG_KEXEC) += machine_kexec_$(BITS).o
obj-$(CONFIG_KEXEC) += relocate_kernel_$(BITS).o crash.o
obj-$(CONFIG_CRASH_DUMP) += crash_dump_$(BITS).o
-obj-y += vsmp_64.o
+obj-$(CONFIG_X86_VSMP) += vsmp_64.o
obj-$(CONFIG_KPROBES) += kprobes.o
obj-$(CONFIG_MODULES) += module_$(BITS).o
obj-$(CONFIG_EFI) += efi.o efi_$(BITS).o efi_stub_$(BITS).o
###
# 64 bit specific files
ifeq ($(CONFIG_X86_64),y)
- obj-$(CONFIG_X86_UV) += tlb_uv.o bios_uv.o uv_irq.o uv_sysfs.o
+ obj-$(CONFIG_X86_UV) += tlb_uv.o bios_uv.o uv_irq.o uv_sysfs.o uv_time.o
obj-$(CONFIG_X86_PM_TIMER) += pmtimer_64.o
obj-$(CONFIG_AUDIT) += audit_64.o
.send_IPI_all = flat_send_IPI_all,
.send_IPI_self = apic_send_IPI_self,
- .wakeup_cpu = NULL,
.trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW,
.trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH,
.wait_for_init_deassert = NULL,
.send_IPI_all = physflat_send_IPI_all,
.send_IPI_self = apic_send_IPI_self,
- .wakeup_cpu = NULL,
.trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW,
.trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH,
.wait_for_init_deassert = NULL,
#include <asm/apic.h>
#include <asm/ipi.h>
-static inline unsigned bigsmp_get_apic_id(unsigned long x)
+static unsigned bigsmp_get_apic_id(unsigned long x)
{
return (x >> 24) & 0xFF;
}
-static inline int bigsmp_apic_id_registered(void)
+static int bigsmp_apic_id_registered(void)
{
return 1;
}
-static inline const cpumask_t *bigsmp_target_cpus(void)
+static const cpumask_t *bigsmp_target_cpus(void)
{
#ifdef CONFIG_SMP
return &cpu_online_map;
#endif
}
-static inline unsigned long
-bigsmp_check_apicid_used(physid_mask_t bitmap, int apicid)
+static unsigned long bigsmp_check_apicid_used(physid_mask_t bitmap, int apicid)
{
return 0;
}
-static inline unsigned long bigsmp_check_apicid_present(int bit)
+static unsigned long bigsmp_check_apicid_present(int bit)
{
return 1;
}
* an APIC. See e.g. "AP-388 82489DX User's Manual" (Intel
* document number 292116). So here it goes...
*/
-static inline void bigsmp_init_apic_ldr(void)
+static void bigsmp_init_apic_ldr(void)
{
unsigned long val;
int cpu = smp_processor_id();
apic_write(APIC_LDR, val);
}
-static inline void bigsmp_setup_apic_routing(void)
+static void bigsmp_setup_apic_routing(void)
{
printk(KERN_INFO
"Enabling APIC mode: Physflat. Using %d I/O APICs\n",
nr_ioapics);
}
-static inline int bigsmp_apicid_to_node(int logical_apicid)
+static int bigsmp_apicid_to_node(int logical_apicid)
{
return apicid_2_node[hard_smp_processor_id()];
}
-static inline int bigsmp_cpu_present_to_apicid(int mps_cpu)
+static int bigsmp_cpu_present_to_apicid(int mps_cpu)
{
if (mps_cpu < nr_cpu_ids)
return (int) per_cpu(x86_bios_cpu_apicid, mps_cpu);
return BAD_APICID;
}
-static inline physid_mask_t bigsmp_apicid_to_cpu_present(int phys_apicid)
+static physid_mask_t bigsmp_apicid_to_cpu_present(int phys_apicid)
{
return physid_mask_of_physid(phys_apicid);
}
return cpu_physical_id(cpu);
}
-static inline physid_mask_t bigsmp_ioapic_phys_id_map(physid_mask_t phys_map)
+static physid_mask_t bigsmp_ioapic_phys_id_map(physid_mask_t phys_map)
{
/* For clustered we don't have a good way to do this yet - hack */
return physids_promote(0xFFL);
}
-static inline void bigsmp_setup_portio_remap(void)
-{
-}
-
-static inline int bigsmp_check_phys_apicid_present(int boot_cpu_physical_apicid)
+static int bigsmp_check_phys_apicid_present(int boot_cpu_physical_apicid)
{
return 1;
}
	/* As we are using a single CPU as the destination, pick only one CPU here */
-static inline unsigned int bigsmp_cpu_mask_to_apicid(const cpumask_t *cpumask)
+static unsigned int bigsmp_cpu_mask_to_apicid(const cpumask_t *cpumask)
{
return bigsmp_cpu_to_logical_apicid(first_cpu(*cpumask));
}
-static inline unsigned int
-bigsmp_cpu_mask_to_apicid_and(const struct cpumask *cpumask,
+static unsigned int bigsmp_cpu_mask_to_apicid_and(const struct cpumask *cpumask,
const struct cpumask *andmask)
{
int cpu;
return BAD_APICID;
}
-static inline int bigsmp_phys_pkg_id(int cpuid_apic, int index_msb)
+static int bigsmp_phys_pkg_id(int cpuid_apic, int index_msb)
{
return cpuid_apic >> index_msb;
}
default_send_IPI_mask_sequence_phys(mask, vector);
}
-static inline void bigsmp_send_IPI_allbutself(int vector)
+static void bigsmp_send_IPI_allbutself(int vector)
{
default_send_IPI_mask_allbutself_phys(cpu_online_mask, vector);
}
-static inline void bigsmp_send_IPI_all(int vector)
+static void bigsmp_send_IPI_all(int vector)
{
bigsmp_send_IPI_mask(cpu_online_mask, vector);
}
.send_IPI_all = bigsmp_send_IPI_all,
.send_IPI_self = default_send_IPI_self,
- .wakeup_cpu = NULL,
.trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW,
.trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH,
return 0;
}
-static int __init es7000_update_apic(void)
+static int es7000_apic_is_cluster(void)
{
- apic->wakeup_cpu = wakeup_secondary_cpu_via_mip;
-
/* MPENTIUMIII */
if (boot_cpu_data.x86 == 6 &&
- (boot_cpu_data.x86_model >= 7 || boot_cpu_data.x86_model <= 11)) {
- es7000_update_apic_to_cluster();
- apic->wait_for_init_deassert = NULL;
- apic->wakeup_cpu = wakeup_secondary_cpu_via_mip;
- }
+	    (boot_cpu_data.x86_model >= 7 && boot_cpu_data.x86_model <= 11))
+ return 1;
return 0;
}
-static void __init setup_unisys(void)
+static void setup_unisys(void)
{
/*
* Determine the generation of the ES7000 currently running.
else
es7000_plat = ES7000_CLASSIC;
ioapic_renumber_irq = es7000_rename_gsi;
-
- x86_quirks->update_apic = es7000_update_apic;
}
/*
* Parse the OEM Table:
*/
-static int __init parse_unisys_oem(char *oemptr)
+static int parse_unisys_oem(char *oemptr)
{
int i;
int success = 0;
}
#ifdef CONFIG_ACPI
-static int __init find_unisys_acpi_oem_table(unsigned long *oem_addr)
+static int find_unisys_acpi_oem_table(unsigned long *oem_addr)
{
struct acpi_table_header *header = NULL;
struct es7000_oem_table *table;
return 0;
}
-static void __init unmap_unisys_acpi_oem_table(unsigned long oem_addr)
+static void unmap_unisys_acpi_oem_table(unsigned long oem_addr)
{
if (!oem_addr)
return;
return 0;
}
+static int es7000_acpi_ret;
+
/* Hook from generic ACPI tables.c */
-static int __init es7000_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
+static int es7000_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
{
unsigned long oem_addr = 0;
int check_dsdt;
*/
unmap_unisys_acpi_oem_table(oem_addr);
}
- return ret;
+
+ es7000_acpi_ret = ret;
+
+ return ret && !es7000_apic_is_cluster();
}
+
+static int es7000_acpi_madt_oem_check_cluster(char *oem_id, char *oem_table_id)
+{
+ int ret = es7000_acpi_ret;
+
+ return ret && es7000_apic_is_cluster();
+}
+
#else /* !CONFIG_ACPI: */
-static int __init es7000_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
+static int es7000_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
+{
+ return 0;
+}
+
+static int es7000_acpi_madt_oem_check_cluster(char *oem_id, char *oem_table_id)
{
return 0;
}
rep_nop();
}
-static int __init
-es7000_mip_write(struct mip_reg *mip_reg)
+static int es7000_mip_write(struct mip_reg *mip_reg)
{
int status = 0;
int spin;
return status;
}
-static void __init es7000_enable_apic_mode(void)
+static void es7000_enable_apic_mode(void)
{
struct mip_reg es7000_mip_reg;
int mip_status;
static void es7000_wait_for_init_deassert(atomic_t *deassert)
{
-#ifndef CONFIG_ES7000_CLUSTERED_APIC
while (!atomic_read(deassert))
cpu_relax();
-#endif
- return;
}
static unsigned int es7000_get_apic_id(unsigned long x)
return 1;
}
-static unsigned int
-es7000_cpu_mask_to_apicid_cluster(const struct cpumask *cpumask)
-{
- int cpus_found = 0;
- int num_bits_set;
- int apicid;
- int cpu;
-
- num_bits_set = cpumask_weight(cpumask);
- /* Return id to all */
- if (num_bits_set == nr_cpu_ids)
- return 0xFF;
- /*
- * The cpus in the mask must all be on the apic cluster. If are not
- * on the same apicid cluster return default value of target_cpus():
- */
- cpu = cpumask_first(cpumask);
- apicid = es7000_cpu_to_logical_apicid(cpu);
-
- while (cpus_found < num_bits_set) {
- if (cpumask_test_cpu(cpu, cpumask)) {
- int new_apicid = es7000_cpu_to_logical_apicid(cpu);
-
- if (APIC_CLUSTER(apicid) != APIC_CLUSTER(new_apicid)) {
- WARN(1, "Not a valid mask!");
-
- return 0xFF;
- }
- apicid = new_apicid;
- cpus_found++;
- }
- cpu++;
- }
- return apicid;
-}
-
static unsigned int es7000_cpu_mask_to_apicid(const cpumask_t *cpumask)
{
- int cpus_found = 0;
- int num_bits_set;
- int apicid;
- int cpu;
+ unsigned int round = 0;
+ int cpu, uninitialized_var(apicid);
- num_bits_set = cpus_weight(*cpumask);
- /* Return id to all */
- if (num_bits_set == nr_cpu_ids)
- return es7000_cpu_to_logical_apicid(0);
/*
- * The cpus in the mask must all be on the apic cluster. If are not
- * on the same apicid cluster return default value of target_cpus():
+ * The cpus in the mask must all be on the apic cluster.
*/
- cpu = first_cpu(*cpumask);
- apicid = es7000_cpu_to_logical_apicid(cpu);
- while (cpus_found < num_bits_set) {
- if (cpu_isset(cpu, *cpumask)) {
- int new_apicid = es7000_cpu_to_logical_apicid(cpu);
+ for_each_cpu(cpu, cpumask) {
+ int new_apicid = es7000_cpu_to_logical_apicid(cpu);
- if (APIC_CLUSTER(apicid) != APIC_CLUSTER(new_apicid)) {
- printk("%s: Not a valid mask!\n", __func__);
+ if (round && APIC_CLUSTER(apicid) != APIC_CLUSTER(new_apicid)) {
+ WARN(1, "Not a valid mask!");
- return es7000_cpu_to_logical_apicid(0);
- }
- apicid = new_apicid;
- cpus_found++;
+ return BAD_APICID;
}
- cpu++;
+ apicid = new_apicid;
+ round++;
}
return apicid;
}
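
Read as a whole rather than as interleaved hunks, the post-patch
es7000_cpu_mask_to_apicid() is the following (a reconstruction of the '+'
lines above, not new logic): the round counter replaces the old
cpus_found/num_bits_set bookkeeping, and an invalid mask now returns
BAD_APICID instead of a fallback apicid.

static unsigned int es7000_cpu_mask_to_apicid(const cpumask_t *cpumask)
{
	unsigned int round = 0;
	int cpu, uninitialized_var(apicid);

	/*
	 * The cpus in the mask must all be on the apic cluster.
	 */
	for_each_cpu(cpu, cpumask) {
		int new_apicid = es7000_cpu_to_logical_apicid(cpu);

		if (round && APIC_CLUSTER(apicid) != APIC_CLUSTER(new_apicid)) {
			WARN(1, "Not a valid mask!");
			return BAD_APICID;
		}
		apicid = new_apicid;
		round++;
	}
	return apicid;
}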
return cpuid_apic >> index_msb;
}
-void __init es7000_update_apic_to_cluster(void)
-{
- apic->target_cpus = target_cpus_cluster;
- apic->irq_delivery_mode = dest_LowestPrio;
- /* logical delivery broadcast to all procs: */
- apic->irq_dest_mode = 1;
-
- apic->init_apic_ldr = es7000_init_apic_ldr_cluster;
-
- apic->cpu_mask_to_apicid = es7000_cpu_mask_to_apicid_cluster;
-}
-
static int probe_es7000(void)
{
/* probed later in mptable/ACPI hooks */
return 0;
}
-static __init int
-es7000_mps_oem_check(struct mpc_table *mpc, char *oem, char *productid)
+static int es7000_mps_ret;
+static int es7000_mps_oem_check(struct mpc_table *mpc, char *oem,
+ char *productid)
{
+ int ret = 0;
+
if (mpc->oemptr) {
struct mpc_oemtable *oem_table =
(struct mpc_oemtable *)mpc->oemptr;
if (!strncmp(oem, "UNISYS", 6))
- return parse_unisys_oem((char *)oem_table);
+ ret = parse_unisys_oem((char *)oem_table);
}
- return 0;
+
+ es7000_mps_ret = ret;
+
+ return ret && !es7000_apic_is_cluster();
}
+static int es7000_mps_oem_check_cluster(struct mpc_table *mpc, char *oem,
+ char *productid)
+{
+ int ret = es7000_mps_ret;
+
+ return ret && es7000_apic_is_cluster();
+}
+
+struct apic apic_es7000_cluster = {
+
+ .name = "es7000",
+ .probe = probe_es7000,
+ .acpi_madt_oem_check = es7000_acpi_madt_oem_check_cluster,
+ .apic_id_registered = es7000_apic_id_registered,
+
+ .irq_delivery_mode = dest_LowestPrio,
+ /* logical delivery broadcast to all procs: */
+ .irq_dest_mode = 1,
+
+ .target_cpus = target_cpus_cluster,
+ .disable_esr = 1,
+ .dest_logical = 0,
+ .check_apicid_used = es7000_check_apicid_used,
+ .check_apicid_present = es7000_check_apicid_present,
+
+ .vector_allocation_domain = es7000_vector_allocation_domain,
+ .init_apic_ldr = es7000_init_apic_ldr_cluster,
+
+ .ioapic_phys_id_map = es7000_ioapic_phys_id_map,
+ .setup_apic_routing = es7000_setup_apic_routing,
+ .multi_timer_check = NULL,
+ .apicid_to_node = es7000_apicid_to_node,
+ .cpu_to_logical_apicid = es7000_cpu_to_logical_apicid,
+ .cpu_present_to_apicid = es7000_cpu_present_to_apicid,
+ .apicid_to_cpu_present = es7000_apicid_to_cpu_present,
+ .setup_portio_remap = NULL,
+ .check_phys_apicid_present = es7000_check_phys_apicid_present,
+ .enable_apic_mode = es7000_enable_apic_mode,
+ .phys_pkg_id = es7000_phys_pkg_id,
+ .mps_oem_check = es7000_mps_oem_check_cluster,
+
+ .get_apic_id = es7000_get_apic_id,
+ .set_apic_id = NULL,
+ .apic_id_mask = 0xFF << 24,
+
+ .cpu_mask_to_apicid = es7000_cpu_mask_to_apicid,
+ .cpu_mask_to_apicid_and = es7000_cpu_mask_to_apicid_and,
+
+ .send_IPI_mask = es7000_send_IPI_mask,
+ .send_IPI_mask_allbutself = NULL,
+ .send_IPI_allbutself = es7000_send_IPI_allbutself,
+ .send_IPI_all = es7000_send_IPI_all,
+ .send_IPI_self = default_send_IPI_self,
+
+ .wakeup_secondary_cpu = wakeup_secondary_cpu_via_mip,
+
+ .trampoline_phys_low = 0x467,
+ .trampoline_phys_high = 0x469,
+
+ .wait_for_init_deassert = NULL,
+
+ /* Nothing to do for most platforms, since cleared by the INIT cycle: */
+ .smp_callin_clear_local_apic = NULL,
+ .inquire_remote_apic = default_inquire_remote_apic,
+
+ .read = native_apic_mem_read,
+ .write = native_apic_mem_write,
+ .icr_read = native_apic_icr_read,
+ .icr_write = native_apic_icr_write,
+ .wait_icr_idle = native_apic_wait_icr_idle,
+ .safe_wait_icr_idle = native_safe_apic_wait_icr_idle,
+};
struct apic apic_es7000 = {
.send_IPI_all = es7000_send_IPI_all,
.send_IPI_self = default_send_IPI_self,
- .wakeup_cpu = NULL,
-
.trampoline_phys_low = 0x467,
.trampoline_phys_high = 0x469,
/* x86_quirks member */
static int mpc_record;
-static __cpuinitdata struct mpc_trans *translation_table[MAX_MPC_ENTRY];
+static struct mpc_trans *translation_table[MAX_MPC_ENTRY];
int mp_bus_id_to_node[MAX_MP_BUSSES];
int mp_bus_id_to_local[MAX_MP_BUSSES];
return 1;
}
-static int __init numaq_update_apic(void)
-{
- apic->wakeup_cpu = wakeup_secondary_cpu_via_nmi;
-
- return 0;
-}
-
static struct x86_quirks numaq_x86_quirks __initdata = {
.arch_pre_time_init = numaq_pre_time_init,
.arch_time_init = NULL,
.mpc_oem_pci_bus = mpc_oem_pci_bus,
.smp_read_mpc_oem = smp_read_mpc_oem,
.setup_ioapic_ids = numaq_setup_ioapic_ids,
- .update_apic = numaq_update_apic,
};
static __init void early_check_numaq(void)
.send_IPI_all = numaq_send_IPI_all,
.send_IPI_self = default_send_IPI_self,
- .wakeup_cpu = NULL,
+ .wakeup_secondary_cpu = wakeup_secondary_cpu_via_nmi,
.trampoline_phys_low = NUMAQ_TRAMPOLINE_PHYS_LOW,
.trampoline_phys_high = NUMAQ_TRAMPOLINE_PHYS_HIGH,
.send_IPI_all = default_send_IPI_all,
.send_IPI_self = default_send_IPI_self,
- .wakeup_cpu = NULL,
.trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW,
.trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH,
extern struct apic apic_summit;
extern struct apic apic_bigsmp;
extern struct apic apic_es7000;
+extern struct apic apic_es7000_cluster;
extern struct apic apic_default;
struct apic *apic = &apic_default;
#endif
#ifdef CONFIG_X86_ES7000
&apic_es7000,
+ &apic_es7000_cluster,
#endif
&apic_default, /* must be last */
NULL,
}
}
- if (x86_quirks->update_apic)
- x86_quirks->update_apic();
-
/* Parsed again by __setup for debug/verbose */
return 0;
}
if (!cmdline_apic && apic == &apic_default) {
if (apic_bigsmp.probe()) {
apic = &apic_bigsmp;
- if (x86_quirks->update_apic)
- x86_quirks->update_apic();
printk(KERN_INFO "Overriding APIC driver with %s\n",
apic->name);
}
/* Not visible without early console */
if (!apic_probe[i])
panic("Didn't find an APIC driver");
-
- if (x86_quirks->update_apic)
- x86_quirks->update_apic();
}
printk(KERN_INFO "Using APIC driver %s\n", apic->name);
}
if (!cmdline_apic) {
apic = apic_probe[i];
- if (x86_quirks->update_apic)
- x86_quirks->update_apic();
printk(KERN_INFO "Switched to APIC driver `%s'.\n",
apic->name);
}
if (!cmdline_apic) {
apic = apic_probe[i];
- if (x86_quirks->update_apic)
- x86_quirks->update_apic();
printk(KERN_INFO "Switched to APIC driver `%s'.\n",
apic->name);
}
apic = &apic_physflat;
printk(KERN_INFO "Setting APIC routing to %s\n", apic->name);
}
-
- if (x86_quirks->update_apic)
- x86_quirks->update_apic();
}
/* Same for both flat and physical. */
extern int use_cyclone;
#ifdef CONFIG_X86_SUMMIT_NUMA
-extern void setup_summit(void);
+static void setup_summit(void);
#else
-#define setup_summit() {}
+static inline void setup_summit(void) {}
#endif
static int summit_mps_oem_check(struct mpc_table *mpc, char *oem,
static unsigned int summit_cpu_mask_to_apicid(const cpumask_t *cpumask)
{
- int cpus_found = 0;
- int num_bits_set;
- int apicid;
- int cpu;
+ unsigned int round = 0;
+ int cpu, apicid = 0;
- num_bits_set = cpus_weight(*cpumask);
- if (num_bits_set >= nr_cpu_ids)
- return BAD_APICID;
/*
* The cpus in the mask must all be on the apic cluster.
*/
- cpu = first_cpu(*cpumask);
- apicid = summit_cpu_to_logical_apicid(cpu);
-
- while (cpus_found < num_bits_set) {
- if (cpu_isset(cpu, *cpumask)) {
- int new_apicid = summit_cpu_to_logical_apicid(cpu);
+ for_each_cpu(cpu, cpumask) {
+ int new_apicid = summit_cpu_to_logical_apicid(cpu);
- if (APIC_CLUSTER(apicid) != APIC_CLUSTER(new_apicid)) {
- printk("%s: Not a valid mask!\n", __func__);
-
- return BAD_APICID;
- }
- apicid = apicid | new_apicid;
- cpus_found++;
+ if (round && APIC_CLUSTER(apicid) != APIC_CLUSTER(new_apicid)) {
+ printk("%s: Not a valid mask!\n", __func__);
+ return BAD_APICID;
}
- cpu++;
+ apicid |= new_apicid;
+ round++;
}
return apicid;
}
}
#ifdef CONFIG_X86_SUMMIT_NUMA
-static struct rio_table_hdr *rio_table_hdr __initdata;
-static struct scal_detail *scal_devs[MAX_NUMNODES] __initdata;
-static struct rio_detail *rio_devs[MAX_NUMNODES*4] __initdata;
+static struct rio_table_hdr *rio_table_hdr;
+static struct scal_detail *scal_devs[MAX_NUMNODES];
+static struct rio_detail *rio_devs[MAX_NUMNODES*4];
#ifndef CONFIG_X86_NUMAQ
-static int mp_bus_id_to_node[MAX_MP_BUSSES] __initdata;
+static int mp_bus_id_to_node[MAX_MP_BUSSES];
#endif
-static int __init setup_pci_node_map_for_wpeg(int wpeg_num, int last_bus)
+static int setup_pci_node_map_for_wpeg(int wpeg_num, int last_bus)
{
int twister = 0, node = 0;
int i, bus, num_buses;
return bus;
}
-static int __init build_detail_arrays(void)
+static int build_detail_arrays(void)
{
unsigned long ptr;
int i, scal_detail_size, rio_detail_size;
return 1;
}
-void __init setup_summit(void)
+void setup_summit(void)
{
unsigned long ptr;
unsigned short offset;
.send_IPI_all = summit_send_IPI_all,
.send_IPI_self = default_send_IPI_self,
- .wakeup_cpu = NULL,
.trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW,
.trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH,
.send_IPI_all = x2apic_send_IPI_all,
.send_IPI_self = x2apic_send_IPI_self,
- .wakeup_cpu = NULL,
.trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW,
.trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH,
.wait_for_init_deassert = NULL,
.send_IPI_all = x2apic_send_IPI_all,
.send_IPI_self = x2apic_send_IPI_self,
- .wakeup_cpu = NULL,
.trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW,
.trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH,
.wait_for_init_deassert = NULL,
*
* Copyright (C) 2007-2008 Silicon Graphics, Inc. All rights reserved.
*/
-
-#include <linux/kernel.h>
-#include <linux/threads.h>
-#include <linux/cpu.h>
#include <linux/cpumask.h>
+#include <linux/hardirq.h>
+#include <linux/proc_fs.h>
+#include <linux/threads.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
#include <linux/string.h>
#include <linux/ctype.h>
-#include <linux/init.h>
#include <linux/sched.h>
-#include <linux/module.h>
-#include <linux/hardirq.h>
#include <linux/timer.h>
-#include <linux/proc_fs.h>
-#include <asm/current.h>
-#include <asm/smp.h>
-#include <asm/apic.h>
-#include <asm/ipi.h>
-#include <asm/pgtable.h>
-#include <asm/uv/uv.h>
+#include <linux/cpu.h>
+#include <linux/init.h>
+
#include <asm/uv/uv_mmrs.h>
#include <asm/uv/uv_hub.h>
+#include <asm/current.h>
+#include <asm/pgtable.h>
#include <asm/uv/bios.h>
+#include <asm/uv/uv.h>
+#include <asm/apic.h>
+#include <asm/ipi.h>
+#include <asm/smp.h>
DEFINE_PER_CPU(int, x2apic_extra_bits);
cpumask_set_cpu(cpu, retmask);
}
-int uv_wakeup_secondary(int phys_apicid, unsigned int start_rip)
+static int uv_wakeup_secondary(int phys_apicid, unsigned long start_rip)
{
+#ifdef CONFIG_SMP
unsigned long val;
int pnode;
pnode = uv_apicid_to_pnode(phys_apicid);
val = (1UL << UVH_IPI_INT_SEND_SHFT) |
(phys_apicid << UVH_IPI_INT_APIC_ID_SHFT) |
- (((long)start_rip << UVH_IPI_INT_VECTOR_SHFT) >> 12) |
+ ((start_rip << UVH_IPI_INT_VECTOR_SHFT) >> 12) |
APIC_DM_INIT;
uv_write_global_mmr64(pnode, UVH_IPI_INT, val);
mdelay(10);
val = (1UL << UVH_IPI_INT_SEND_SHFT) |
(phys_apicid << UVH_IPI_INT_APIC_ID_SHFT) |
- (((long)start_rip << UVH_IPI_INT_VECTOR_SHFT) >> 12) |
+ ((start_rip << UVH_IPI_INT_VECTOR_SHFT) >> 12) |
APIC_DM_STARTUP;
uv_write_global_mmr64(pnode, UVH_IPI_INT, val);
+
+ atomic_set(&init_deasserted, 1);
+#endif
return 0;
}
.send_IPI_all = uv_send_IPI_all,
.send_IPI_self = uv_send_IPI_self,
- .wakeup_cpu = NULL,
+ .wakeup_secondary_cpu = uv_wakeup_secondary,
.trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW,
.trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH,
.wait_for_init_deassert = NULL,
paddr = base << shift;
bytes = (1UL << shift) * (max_pnode + 1);
printk(KERN_INFO "UV: Map %s_HI 0x%lx - 0x%lx\n", id, paddr,
- paddr + bytes);
+ paddr + bytes);
if (map_type == map_uc)
init_extra_mapping_uc(paddr, bytes);
else
/*
* Called on each cpu to initialize the per_cpu UV data area.
- * ZZZ hotplug not supported yet
+ * FIXME: hotplug not supported yet
*/
void __cpuinit uv_cpu_init(void)
{
#include <asm/io.h>
#include <asm/processor.h>
#include <asm/apic.h>
+#include <asm/cpu.h>
#ifdef CONFIG_X86_64
# include <asm/numa_64.h>
}
}
+static void __cpuinit amd_k7_smp_check(struct cpuinfo_x86 *c)
+{
+#ifdef CONFIG_SMP
+ /* Is this call coming from identify_secondary_cpu()? */
+ if (c->cpu_index == boot_cpu_id)
+ return;
+
+ /*
+ * Certain Athlons might work (for various values of 'work') in SMP
+ * but they are not certified as MP capable.
+ */
+ /* Athlon 660/661 is valid. */
+ if ((c->x86_model == 6) && ((c->x86_mask == 0) ||
+ (c->x86_mask == 1)))
+ goto valid_k7;
+
+ /* Duron 670 is valid */
+ if ((c->x86_model == 7) && (c->x86_mask == 0))
+ goto valid_k7;
+
+ /*
+ * Athlon 662, Duron 671, and Athlon >model 7 have the MP capability
+ * bit. It's worth noting that the A5 stepping (662) of some
+ * Athlon XP's have the MP bit set.
+ * See http://www.heise.de/newsticker/data/jow-18.10.01-000 for
+ * more.
+ */
+ if (((c->x86_model == 6) && (c->x86_mask >= 2)) ||
+ ((c->x86_model == 7) && (c->x86_mask >= 1)) ||
+ (c->x86_model > 7))
+ if (cpu_has_mp)
+ goto valid_k7;
+
+ /* If we get here, not a certified SMP capable AMD system. */
+
+ /*
+ * Don't taint if we are running SMP kernel on a single non-MP
+ * approved Athlon
+ */
+ WARN_ONCE(1, "WARNING: This combination of AMD"
+ "processors is not suitable for SMP.\n");
+ if (!test_taint(TAINT_UNSAFE_SMP))
+ add_taint(TAINT_UNSAFE_SMP);
+
+valid_k7:
+ ;
+#endif
+}
+
static void __cpuinit init_amd_k7(struct cpuinfo_x86 *c)
{
u32 l, h;
}
set_cpu_cap(c, X86_FEATURE_K7);
+
+ amd_k7_smp_check(c);
}
#endif
if (!data)
return -ENOMEM;
- data->acpi_data = percpu_ptr(acpi_perf_data, cpu);
+ data->acpi_data = per_cpu_ptr(acpi_perf_data, cpu);
per_cpu(drv_data, cpu) = data;
if (cpu_has(c, X86_FEATURE_CONSTANT_TSC))
.name = "p4-clockmod",
.owner = THIS_MODULE,
.attr = p4clockmod_attr,
- .hide_interface = 1,
};
#include <asm/uaccess.h>
#include <asm/ds.h>
#include <asm/bugs.h>
+#include <asm/cpu.h>
#ifdef CONFIG_X86_64
#include <asm/topology.h>
}
#endif
+static void __cpuinit intel_smp_check(struct cpuinfo_x86 *c)
+{
+#ifdef CONFIG_SMP
+ /* Is this call coming from identify_secondary_cpu()? */
+ if (c->cpu_index == boot_cpu_id)
+ return;
+
+ /*
+ * Mask B, Pentium, but not Pentium MMX
+ */
+ if (c->x86 == 5 &&
+ c->x86_mask >= 1 && c->x86_mask <= 4 &&
+ c->x86_model <= 3) {
+ /*
+ * Remember we have B step Pentia with bugs
+ */
+ WARN_ONCE(1, "WARNING: SMP operation may be unreliable"
+ "with B stepping processors.\n");
+ }
+#endif
+}
+
static void __cpuinit intel_workarounds(struct cpuinfo_x86 *c)
{
unsigned long lo, hi;
#ifdef CONFIG_X86_NUMAQ
numaq_tsc_disable();
#endif
+
+ intel_smp_check(c);
}
#else
static void __cpuinit intel_workarounds(struct cpuinfo_x86 *c)
/*
* Get CPU information for use by the procfs.
*/
-#ifdef CONFIG_X86_32
static void show_cpuinfo_core(struct seq_file *m, struct cpuinfo_x86 *c,
unsigned int cpu)
{
-#ifdef CONFIG_X86_HT
+#ifdef CONFIG_SMP
if (c->x86_max_cores * smp_num_siblings > 1) {
seq_printf(m, "physical id\t: %d\n", c->phys_proc_id);
seq_printf(m, "siblings\t: %d\n",
#endif
}
+#ifdef CONFIG_X86_32
static void show_cpuinfo_misc(struct seq_file *m, struct cpuinfo_x86 *c)
{
/*
c->wp_works_ok ? "yes" : "no");
}
#else
-static void show_cpuinfo_core(struct seq_file *m, struct cpuinfo_x86 *c,
- unsigned int cpu)
-{
-#ifdef CONFIG_SMP
- if (c->x86_max_cores * smp_num_siblings > 1) {
- seq_printf(m, "physical id\t: %d\n", c->phys_proc_id);
- seq_printf(m, "siblings\t: %d\n",
- cpus_weight(per_cpu(cpu_core_map, cpu)));
- seq_printf(m, "core id\t\t: %d\n", c->cpu_core_id);
- seq_printf(m, "cpu cores\t: %d\n", c->booted_cores);
- seq_printf(m, "apicid\t\t: %d\n", c->apicid);
- seq_printf(m, "initial apicid\t: %d\n", c->initial_apicid);
- }
-#endif
-}
-
static void show_cpuinfo_misc(struct seq_file *m, struct cpuinfo_x86 *c)
{
seq_printf(m,
spin_unlock_irqrestore(&ds_lock, irq);
- ds_write_config(tracer->ds.context, &tracer->trace.ds, ds_bts);
+ ds_write_config(tracer->ds.context, &tracer->trace.ds, ds_pebs);
ds_resume_pebs(tracer);
return tracer;
void ds_exit_thread(struct task_struct *tsk)
{
- WARN_ON(tsk->thread.ds_ctx);
}
efi_memory_desc_t *md;
efi_status_t status;
unsigned long size;
- u64 end, systab, addr, npages;
+ u64 end, systab, addr, npages, end_pfn;
void *p, *va;
efi.systab = NULL;
size = md->num_pages << EFI_PAGE_SHIFT;
end = md->phys_addr + size;
- if (PFN_UP(end) <= max_low_pfn_mapped)
+ end_pfn = PFN_UP(end);
+ if (end_pfn <= max_low_pfn_mapped
+ || (end_pfn > (1UL << (32 - PAGE_SHIFT))
+ && end_pfn <= max_pfn_mapped))
va = __va(md->phys_addr);
else
va = efi_ioremap(md->phys_addr, size);
void __iomem *__init efi_ioremap(unsigned long phys_addr, unsigned long size)
{
- static unsigned pages_mapped __initdata;
- unsigned i, pages;
- unsigned long offset;
+ unsigned long last_map_pfn;
- pages = PFN_UP(phys_addr + size) - PFN_DOWN(phys_addr);
- offset = phys_addr & ~PAGE_MASK;
- phys_addr &= PAGE_MASK;
-
- if (pages_mapped + pages > MAX_EFI_IO_PAGES)
+ last_map_pfn = init_memory_mapping(phys_addr, phys_addr + size);
+ if ((last_map_pfn << PAGE_SHIFT) < phys_addr + size)
return NULL;
- for (i = 0; i < pages; i++) {
- __set_fixmap(FIX_EFI_IO_MAP_FIRST_PAGE - pages_mapped,
- phys_addr, PAGE_KERNEL);
- phys_addr += PAGE_SIZE;
- pages_mapped++;
- }
-
- return (void __iomem *)__fix_to_virt(FIX_EFI_IO_MAP_FIRST_PAGE - \
- (pages_mapped - pages)) + offset;
+ return (void __iomem *)__va(phys_addr);
}
#endif
apicinterrupt LOCAL_TIMER_VECTOR \
apic_timer_interrupt smp_apic_timer_interrupt
+apicinterrupt GENERIC_INTERRUPT_VECTOR \
+ generic_interrupt smp_generic_interrupt
#ifdef CONFIG_SMP
apicinterrupt INVALIDATE_TLB_VECTOR_START+0 \
#ifdef CONFIG_X86_32
if (!HAVE_HWFP) {
memset(tsk->thread.xstate, 0, xstate_size);
- finit();
+ finit_task(tsk);
set_stopped_child_used_math(tsk);
return 0;
}
t->io_bitmap_max = bytes;
-#ifdef CONFIG_X86_32
- /*
- * Sets the lazy trigger so that the next I/O operation will
- * reload the correct bitmap.
- * Reset the owner so that a process switch will not set
- * tss->io_bitmap_base to IO_BITMAP_OFFSET.
- */
- tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET_LAZY;
- tss->io_bitmap_owner = NULL;
-#else
/* Update the TSS: */
memcpy(tss->io_bitmap, t->io_bitmap_ptr, bytes_updated);
-#endif
put_cpu();
atomic_t irq_err_count;
+/* Function pointer for generic interrupt vector handling */
+void (*generic_interrupt_extension)(void) = NULL;
+
/*
* 'what should we do if we get a hw irq event on an illegal vector'.
* each architecture has to answer this themselves.
seq_printf(p, "%10u ", irq_stats(j)->apic_timer_irqs);
seq_printf(p, " Local timer interrupts\n");
#endif
+ if (generic_interrupt_extension) {
+ seq_printf(p, "PLT: ");
+ for_each_online_cpu(j)
+ seq_printf(p, "%10u ", irq_stats(j)->generic_irqs);
+ seq_printf(p, " Platform interrupts\n");
+ }
#ifdef CONFIG_SMP
seq_printf(p, "RES: ");
for_each_online_cpu(j)
#ifdef CONFIG_X86_LOCAL_APIC
sum += irq_stats(cpu)->apic_timer_irqs;
#endif
+ if (generic_interrupt_extension)
+ sum += irq_stats(cpu)->generic_irqs;
#ifdef CONFIG_SMP
sum += irq_stats(cpu)->irq_resched_count;
sum += irq_stats(cpu)->irq_call_count;
return 1;
}
+/*
+ * Handler for GENERIC_INTERRUPT_VECTOR.
+ */
+void smp_generic_interrupt(struct pt_regs *regs)
+{
+ struct pt_regs *old_regs = set_irq_regs(regs);
+
+ ack_APIC_irq();
+
+ exit_idle();
+
+ irq_enter();
+
+ inc_irq_stat(generic_irqs);
+
+ if (generic_interrupt_extension)
+ generic_interrupt_extension();
+
+ irq_exit();
+
+ set_irq_regs(old_regs);
+}
+
EXPORT_SYMBOL_GPL(vector_used_by_percpu_irq);
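The handler above only calls through generic_interrupt_extension once a
platform driver has installed a hook; nothing shown here installs one yet.
A minimal sketch of how such a driver might hook the vector - the handler
and init function names are hypothetical, only generic_interrupt_extension
itself comes from the code above:

	/* Hypothetical driver hooking the platform interrupt vector. */
	static void my_platform_event_handler(void)
	{
		/* Runs in hard-irq context; the APIC has been acked. */
	}

	static int __init my_platform_init(void)
	{
		generic_interrupt_extension = my_platform_event_handler;
		return 0;
	}

Once installed, its invocations show up in the "PLT:" row of
/proc/interrupts via inc_irq_stat(generic_irqs).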
#include <linux/cpu.h>
#include <linux/delay.h>
#include <linux/uaccess.h>
+#include <linux/percpu.h>
#include <asm/apic.h>
union irq_ctx {
struct thread_info tinfo;
u32 stack[THREAD_SIZE/sizeof(u32)];
-};
+} __attribute__((aligned(PAGE_SIZE)));
-static union irq_ctx *hardirq_ctx[NR_CPUS] __read_mostly;
-static union irq_ctx *softirq_ctx[NR_CPUS] __read_mostly;
+static DEFINE_PER_CPU(union irq_ctx *, hardirq_ctx);
+static DEFINE_PER_CPU(union irq_ctx *, softirq_ctx);
-static char softirq_stack[NR_CPUS * THREAD_SIZE] __page_aligned_bss;
-static char hardirq_stack[NR_CPUS * THREAD_SIZE] __page_aligned_bss;
+static DEFINE_PER_CPU_PAGE_ALIGNED(union irq_ctx, hardirq_stack);
+static DEFINE_PER_CPU_PAGE_ALIGNED(union irq_ctx, softirq_stack);
static void call_on_stack(void *func, void *stack)
{
u32 *isp, arg1, arg2;
curctx = (union irq_ctx *) current_thread_info();
- irqctx = hardirq_ctx[smp_processor_id()];
+ irqctx = __get_cpu_var(hardirq_ctx);
/*
* this is where we switch to the IRQ stack. However, if we are
{
union irq_ctx *irqctx;
- if (hardirq_ctx[cpu])
+ if (per_cpu(hardirq_ctx, cpu))
return;
- irqctx = (union irq_ctx*) &hardirq_stack[cpu*THREAD_SIZE];
+ irqctx = &per_cpu(hardirq_stack, cpu);
irqctx->tinfo.task = NULL;
irqctx->tinfo.exec_domain = NULL;
irqctx->tinfo.cpu = cpu;
irqctx->tinfo.preempt_count = HARDIRQ_OFFSET;
irqctx->tinfo.addr_limit = MAKE_MM_SEG(0);
- hardirq_ctx[cpu] = irqctx;
+ per_cpu(hardirq_ctx, cpu) = irqctx;
- irqctx = (union irq_ctx *) &softirq_stack[cpu*THREAD_SIZE];
+ irqctx = &per_cpu(softirq_stack, cpu);
irqctx->tinfo.task = NULL;
irqctx->tinfo.exec_domain = NULL;
irqctx->tinfo.cpu = cpu;
irqctx->tinfo.preempt_count = 0;
irqctx->tinfo.addr_limit = MAKE_MM_SEG(0);
- softirq_ctx[cpu] = irqctx;
+ per_cpu(softirq_ctx, cpu) = irqctx;
printk(KERN_DEBUG "CPU %u irqstacks, hard=%p soft=%p\n",
- cpu, hardirq_ctx[cpu], softirq_ctx[cpu]);
+ cpu, per_cpu(hardirq_ctx, cpu), per_cpu(softirq_ctx, cpu));
}
void irq_ctx_exit(int cpu)
{
- hardirq_ctx[cpu] = NULL;
+ per_cpu(hardirq_ctx, cpu) = NULL;
}
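The same conversion - an NR_CPUS-sized array replaced by a per-CPU
variable - recurs throughout this series: it stops sizing static storage
for the NR_CPUS worst case and keeps each CPU's data in that CPU's own
per-cpu area. A minimal sketch of the pattern, with foo as a placeholder
type that is not part of the patch:

	#include <linux/percpu.h>

	struct foo;	/* placeholder type */

	/* Before: statically sized for the NR_CPUS worst case. */
	static struct foo *foo_ptrs[NR_CPUS];

	/* After: one copy per possible CPU in the per-cpu area. */
	static DEFINE_PER_CPU(struct foo *, foo_ptr);

	static void foo_clear(int cpu)
	{
		foo_ptrs[cpu] = NULL;		/* old style */
		per_cpu(foo_ptr, cpu) = NULL;	/* a given CPU's copy */
	}

Local-CPU access goes through __get_cpu_var(), as in the do_softirq()
hunk below.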
asmlinkage void do_softirq(void)
if (local_softirq_pending()) {
curctx = current_thread_info();
- irqctx = softirq_ctx[smp_processor_id()];
+ irqctx = __get_cpu_var(softirq_ctx);
irqctx->tinfo.task = curctx->task;
irqctx->tinfo.previous_esp = current_stack_pointer;
/* self generated IPI for local APIC timer */
alloc_intr_gate(LOCAL_TIMER_VECTOR, apic_timer_interrupt);
+ /* generic IPI for platform specific use */
+ alloc_intr_gate(GENERIC_INTERRUPT_VECTOR, generic_interrupt);
+
/* IPI vectors for APIC spurious and error interrupts */
alloc_intr_gate(SPURIOUS_APIC_VECTOR, spurious_interrupt);
alloc_intr_gate(ERROR_APIC_VECTOR, error_interrupt);
/* self generated IPI for local APIC timer */
alloc_intr_gate(LOCAL_TIMER_VECTOR, apic_timer_interrupt);
+ /* generic IPI for platform specific use */
+ alloc_intr_gate(GENERIC_INTERRUPT_VECTOR, generic_interrupt);
+
/* IPI vectors for APIC spurious and error interrupts */
alloc_intr_gate(SPURIOUS_APIC_VECTOR, spurious_interrupt);
alloc_intr_gate(ERROR_APIC_VECTOR, error_interrupt);
#include <linux/ftrace.h>
#include <linux/suspend.h>
#include <linux/gfp.h>
+#include <linux/io.h>
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
#include <asm/mmu_context.h>
-#include <asm/io.h>
#include <asm/apic.h>
#include <asm/cpufeature.h>
#include <asm/desc.h>
"\tmovl %%eax,%%fs\n"
"\tmovl %%eax,%%gs\n"
"\tmovl %%eax,%%ss\n"
- ::: "eax", "memory");
+ : : : "eax", "memory");
#undef STR
#undef __STR
}
if (image->preserve_context) {
#ifdef CONFIG_X86_IO_APIC
- /* We need to put APICs in legacy mode so that we can
+ /*
+ * We need to put APICs in legacy mode so that we can
 * get timer interrupts in the second kernel. kexec/kdump
 * paths already have calls to disable_IO_APIC() in
 * one form or another. The kexec jump path also needs
page_list[PA_SWAP_PAGE] = (page_to_pfn(image->swap_page)
<< PAGE_SHIFT);
- /* The segment registers are funny things, they have both a
+ /*
+ * The segment registers are funny things, they have both a
* visible and an invisible part. Whenever the visible part is
* set to a specific selector, the invisible part is loaded
 * from a table in memory. At no other time is the
* segments, before I zap the gdt with an invalid value.
*/
load_segments();
- /* The gdt & idt are now invalid.
+ /*
+ * The gdt & idt are now invalid.
* If you want to load them you must set up your own idt & gdt.
*/
- set_gdt(phys_to_virt(0),0);
- set_idt(phys_to_virt(0),0);
+ set_gdt(phys_to_virt(0), 0);
+ set_idt(phys_to_virt(0), 0);
/* now call it */
image->start = relocate_kernel_ptr((unsigned long)image->head,
#include <linux/reboot.h>
#include <linux/numa.h>
#include <linux/ftrace.h>
+#include <linux/io.h>
+#include <linux/suspend.h>
#include <asm/pgtable.h>
#include <asm/tlbflush.h>
#include <asm/mmu_context.h>
-#include <asm/io.h>
+
+static int init_one_level2_page(struct kimage *image, pgd_t *pgd,
+ unsigned long addr)
+{
+ pud_t *pud;
+ pmd_t *pmd;
+ struct page *page;
+ int result = -ENOMEM;
+
+ addr &= PMD_MASK;
+ pgd += pgd_index(addr);
+ if (!pgd_present(*pgd)) {
+ page = kimage_alloc_control_pages(image, 0);
+ if (!page)
+ goto out;
+ pud = (pud_t *)page_address(page);
+ memset(pud, 0, PAGE_SIZE);
+ set_pgd(pgd, __pgd(__pa(pud) | _KERNPG_TABLE));
+ }
+ pud = pud_offset(pgd, addr);
+ if (!pud_present(*pud)) {
+ page = kimage_alloc_control_pages(image, 0);
+ if (!page)
+ goto out;
+ pmd = (pmd_t *)page_address(page);
+ memset(pmd, 0, PAGE_SIZE);
+ set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE));
+ }
+ pmd = pmd_offset(pud, addr);
+ if (!pmd_present(*pmd))
+ set_pmd(pmd, __pmd(addr | __PAGE_KERNEL_LARGE_EXEC));
+ result = 0;
+out:
+ return result;
+}
static void init_level2_page(pmd_t *level2p, unsigned long addr)
{
}
level3p = (pud_t *)page_address(page);
result = init_level3_page(image, level3p, addr, last_addr);
- if (result) {
+ if (result)
goto out;
- }
set_pgd(level4p++, __pgd(__pa(level3p) | _KERNPG_TABLE));
addr += PGDIR_SIZE;
}
int result;
level4p = (pgd_t *)__va(start_pgtable);
result = init_level4_page(image, level4p, 0, max_pfn << PAGE_SHIFT);
+ if (result)
+ return result;
+ /*
+ * image->start may be outside 0 ~ max_pfn, for example when
+ * jumping back to the original kernel from the kexeced kernel
+ */
+ result = init_one_level2_page(image, level4p, image->start);
if (result)
return result;
return init_transition_pgtable(image, level4p);
{
unsigned long page_list[PAGES_NR];
void *control_page;
+ int save_ftrace_enabled;
- tracer_disable();
+#ifdef CONFIG_KEXEC_JUMP
+ if (kexec_image->preserve_context)
+ save_processor_state();
+#endif
+
+ save_ftrace_enabled = __ftrace_enabled_save();
/* Interrupts aren't acceptable while we reboot */
local_irq_disable();
+ if (image->preserve_context) {
+#ifdef CONFIG_X86_IO_APIC
+ /*
+ * We need to put APICs in legacy mode so that we can
+ * get timer interrupts in the second kernel. kexec/kdump
+ * paths already have calls to disable_IO_APIC() in
+ * one form or another. The kexec jump path also needs
+ * one.
+ */
+ disable_IO_APIC();
+#endif
+ }
+
control_page = page_address(image->control_code_page) + PAGE_SIZE;
- memcpy(control_page, relocate_kernel, PAGE_SIZE);
+ memcpy(control_page, relocate_kernel, KEXEC_CONTROL_CODE_MAX_SIZE);
page_list[PA_CONTROL_PAGE] = virt_to_phys(control_page);
+ page_list[VA_CONTROL_PAGE] = (unsigned long)control_page;
page_list[PA_TABLE_PAGE] =
(unsigned long)__pa(page_address(image->control_code_page));
- /* The segment registers are funny things, they have both a
+ if (image->type == KEXEC_TYPE_DEFAULT)
+ page_list[PA_SWAP_PAGE] = (page_to_pfn(image->swap_page)
+ << PAGE_SHIFT);
+
+ /*
+ * The segment registers are funny things, they have both a
* visible and an invisible part. Whenever the visible part is
* set to a specific selector, the invisible part is loaded
 * from a table in memory. At no other time is the
* segments, before I zap the gdt with an invalid value.
*/
load_segments();
- /* The gdt & idt are now invalid.
+ /*
+ * The gdt & idt are now invalid.
* If you want to load them you must set up your own idt & gdt.
*/
- set_gdt(phys_to_virt(0),0);
- set_idt(phys_to_virt(0),0);
+ set_gdt(phys_to_virt(0), 0);
+ set_idt(phys_to_virt(0), 0);
/* now call it */
- relocate_kernel((unsigned long)image->head, (unsigned long)page_list,
- image->start);
+ image->start = relocate_kernel((unsigned long)image->head,
+ (unsigned long)page_list,
+ image->start,
+ image->preserve_context);
+
+#ifdef CONFIG_KEXEC_JUMP
+ if (kexec_image->preserve_context)
+ restore_processor_state();
+#endif
+
+ __ftrace_enabled_restore(save_ftrace_enabled);
}
void arch_crash_save_vmcoreinfo(void)
static struct mpf_intel *mpf_found;
+static unsigned long __init get_mpc_size(unsigned long physptr)
+{
+ struct mpc_table *mpc;
+ unsigned long size;
+
+ mpc = early_ioremap(physptr, PAGE_SIZE);
+ size = mpc->length;
+ early_iounmap(mpc, PAGE_SIZE);
+ apic_printk(APIC_VERBOSE, " mpc: %lx-%lx\n", physptr, physptr + size);
+
+ return size;
+}
+
/*
* Scan the memory blocks for an SMP configuration block.
*/
construct_default_ISA_mptable(mpf->feature1);
} else if (mpf->physptr) {
+ struct mpc_table *mpc;
+ unsigned long size;
+
+ size = get_mpc_size(mpf->physptr);
+ mpc = early_ioremap(mpf->physptr, size);
/*
* Read the physical hardware table. Anything here will
* override the defaults.
*/
- if (!smp_read_mpc(phys_to_virt(mpf->physptr), early)) {
+ if (!smp_read_mpc(mpc, early)) {
#ifdef CONFIG_X86_LOCAL_APIC
smp_found_config = 0;
#endif
"BIOS bug, MP table errors detected!...\n");
printk(KERN_ERR "... disabling SMP support. "
"(tell your hw vendor)\n");
+ early_iounmap(mpc, size);
return;
}
+ early_iounmap(mpc, size);
if (early)
return;
if (!reserve)
return 1;
- reserve_bootmem_generic(virt_to_phys(mpf), PAGE_SIZE,
+ reserve_bootmem_generic(virt_to_phys(mpf), sizeof(*mpf),
BOOTMEM_DEFAULT);
if (mpf->physptr) {
- unsigned long size = PAGE_SIZE;
+ unsigned long size = get_mpc_size(mpf->physptr);
#ifdef CONFIG_X86_32
/*
 * We cannot access the MPC table to compute
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/mm.h>
-#include <asm/idle.h>
#include <linux/smp.h>
+#include <linux/prctl.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/module.h>
#include <linux/ftrace.h>
#include <asm/system.h>
#include <asm/apic.h>
+#include <asm/idle.h>
+#include <asm/uaccess.h>
+#include <asm/i387.h>
unsigned long idle_halt;
EXPORT_SYMBOL(idle_halt);
SLAB_PANIC, NULL);
}
+/*
+ * Free current thread data structures etc.
+ */
+void exit_thread(void)
+{
+ struct task_struct *me = current;
+ struct thread_struct *t = &me->thread;
+
+ if (me->thread.io_bitmap_ptr) {
+ struct tss_struct *tss = &per_cpu(init_tss, get_cpu());
+
+ kfree(t->io_bitmap_ptr);
+ t->io_bitmap_ptr = NULL;
+ clear_thread_flag(TIF_IO_BITMAP);
+ /*
+ * Careful, clear this in the TSS too:
+ */
+ memset(tss->io_bitmap, 0xff, t->io_bitmap_max);
+ t->io_bitmap_max = 0;
+ put_cpu();
+ }
+
+ ds_exit_thread(current);
+}
+
+void flush_thread(void)
+{
+ struct task_struct *tsk = current;
+
+#ifdef CONFIG_X86_64
+ if (test_tsk_thread_flag(tsk, TIF_ABI_PENDING)) {
+ clear_tsk_thread_flag(tsk, TIF_ABI_PENDING);
+ if (test_tsk_thread_flag(tsk, TIF_IA32)) {
+ clear_tsk_thread_flag(tsk, TIF_IA32);
+ } else {
+ set_tsk_thread_flag(tsk, TIF_IA32);
+ current_thread_info()->status |= TS_COMPAT;
+ }
+ }
+#endif
+
+ clear_tsk_thread_flag(tsk, TIF_DEBUG);
+
+ tsk->thread.debugreg0 = 0;
+ tsk->thread.debugreg1 = 0;
+ tsk->thread.debugreg2 = 0;
+ tsk->thread.debugreg3 = 0;
+ tsk->thread.debugreg6 = 0;
+ tsk->thread.debugreg7 = 0;
+ memset(tsk->thread.tls_array, 0, sizeof(tsk->thread.tls_array));
+ /*
+ * Forget coprocessor state.
+ */
+ tsk->fpu_counter = 0;
+ clear_fpu(tsk);
+ clear_used_math();
+}
+
+static void hard_disable_TSC(void)
+{
+ write_cr4(read_cr4() | X86_CR4_TSD);
+}
+
+void disable_TSC(void)
+{
+ preempt_disable();
+ if (!test_and_set_thread_flag(TIF_NOTSC))
+ /*
+ * Must flip the CPU state synchronously with
+ * TIF_NOTSC in the current running context.
+ */
+ hard_disable_TSC();
+ preempt_enable();
+}
+
+static void hard_enable_TSC(void)
+{
+ write_cr4(read_cr4() & ~X86_CR4_TSD);
+}
+
+static void enable_TSC(void)
+{
+ preempt_disable();
+ if (test_and_clear_thread_flag(TIF_NOTSC))
+ /*
+ * Must flip the CPU state synchronously with
+ * TIF_NOTSC in the current running context.
+ */
+ hard_enable_TSC();
+ preempt_enable();
+}
+
+int get_tsc_mode(unsigned long adr)
+{
+ unsigned int val;
+
+ if (test_thread_flag(TIF_NOTSC))
+ val = PR_TSC_SIGSEGV;
+ else
+ val = PR_TSC_ENABLE;
+
+ return put_user(val, (unsigned int __user *)adr);
+}
+
+int set_tsc_mode(unsigned int val)
+{
+ if (val == PR_TSC_SIGSEGV)
+ disable_TSC();
+ else if (val == PR_TSC_ENABLE)
+ enable_TSC();
+ else
+ return -EINVAL;
+
+ return 0;
+}
+
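get_tsc_mode() and set_tsc_mode() are the kernel side of the PR_GET_TSC
and PR_SET_TSC prctl interface; PR_TSC_SIGSEGV makes any later RDTSC in
the task fault. A short user-space sketch, with error handling elided:

	#include <stdio.h>
	#include <sys/prctl.h>

	int main(void)
	{
		int mode;

		/* Forbid the TSC: a later rdtsc raises SIGSEGV. */
		prctl(PR_SET_TSC, PR_TSC_SIGSEGV, 0, 0, 0);

		prctl(PR_GET_TSC, &mode, 0, 0, 0);
		printf("tsc mode: %s\n",
		       mode == PR_TSC_SIGSEGV ? "sigsegv" : "enable");

		/* Allow it again. */
		prctl(PR_SET_TSC, PR_TSC_ENABLE, 0, 0, 0);
		return 0;
	}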
+void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
+ struct tss_struct *tss)
+{
+ struct thread_struct *prev, *next;
+
+ prev = &prev_p->thread;
+ next = &next_p->thread;
+
+ if (test_tsk_thread_flag(next_p, TIF_DS_AREA_MSR) ||
+ test_tsk_thread_flag(prev_p, TIF_DS_AREA_MSR))
+ ds_switch_to(prev_p, next_p);
+ else if (next->debugctlmsr != prev->debugctlmsr)
+ update_debugctlmsr(next->debugctlmsr);
+
+ if (test_tsk_thread_flag(next_p, TIF_DEBUG)) {
+ set_debugreg(next->debugreg0, 0);
+ set_debugreg(next->debugreg1, 1);
+ set_debugreg(next->debugreg2, 2);
+ set_debugreg(next->debugreg3, 3);
+ /* no 4 and 5 */
+ set_debugreg(next->debugreg6, 6);
+ set_debugreg(next->debugreg7, 7);
+ }
+
+ if (test_tsk_thread_flag(prev_p, TIF_NOTSC) ^
+ test_tsk_thread_flag(next_p, TIF_NOTSC)) {
+ /* prev and next are different */
+ if (test_tsk_thread_flag(next_p, TIF_NOTSC))
+ hard_disable_TSC();
+ else
+ hard_enable_TSC();
+ }
+
+ if (test_tsk_thread_flag(next_p, TIF_IO_BITMAP)) {
+ /*
+ * Copy the relevant range of the IO bitmap.
+ * Normally this is 128 bytes or less:
+ */
+ memcpy(tss->io_bitmap, next->io_bitmap_ptr,
+ max(prev->io_bitmap_max, next->io_bitmap_max));
+ } else if (test_tsk_thread_flag(prev_p, TIF_IO_BITMAP)) {
+ /*
+ * Clear any possible leftover bits:
+ */
+ memset(tss->io_bitmap, 0xff, prev->io_bitmap_max);
+ }
+}
+
+int sys_fork(struct pt_regs *regs)
+{
+ return do_fork(SIGCHLD, regs->sp, regs, 0, NULL, NULL);
+}
+
+/*
+ * This is trivial, and on the face of it looks like it
+ * could equally well be done in user mode.
+ *
+ * Not so, for quite unobvious reasons - register pressure.
+ * In user mode vfork() cannot have a stack frame, and if
+ * done by calling the "clone()" system call directly, you
+ * do not have enough call-clobbered registers to hold all
+ * the information you need.
+ */
+int sys_vfork(struct pt_regs *regs)
+{
+ return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, regs->sp, regs, 0,
+ NULL, NULL);
+}
+
+
/*
* Idle related variables and functions
*/
}
EXPORT_SYMBOL(kernel_thread);
-/*
- * Free current thread data structures etc..
- */
-void exit_thread(void)
-{
- /* The process may have allocated an io port bitmap... nuke it. */
- if (unlikely(test_thread_flag(TIF_IO_BITMAP))) {
- struct task_struct *tsk = current;
- struct thread_struct *t = &tsk->thread;
- int cpu = get_cpu();
- struct tss_struct *tss = &per_cpu(init_tss, cpu);
-
- kfree(t->io_bitmap_ptr);
- t->io_bitmap_ptr = NULL;
- clear_thread_flag(TIF_IO_BITMAP);
- /*
- * Careful, clear this in the TSS too:
- */
- memset(tss->io_bitmap, 0xff, tss->io_bitmap_max);
- t->io_bitmap_max = 0;
- tss->io_bitmap_owner = NULL;
- tss->io_bitmap_max = 0;
- tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET;
- put_cpu();
- }
-
- ds_exit_thread(current);
-}
-
-void flush_thread(void)
-{
- struct task_struct *tsk = current;
-
- tsk->thread.debugreg0 = 0;
- tsk->thread.debugreg1 = 0;
- tsk->thread.debugreg2 = 0;
- tsk->thread.debugreg3 = 0;
- tsk->thread.debugreg6 = 0;
- tsk->thread.debugreg7 = 0;
- memset(tsk->thread.tls_array, 0, sizeof(tsk->thread.tls_array));
- clear_tsk_thread_flag(tsk, TIF_DEBUG);
- /*
- * Forget coprocessor state..
- */
- tsk->fpu_counter = 0;
- clear_fpu(tsk);
- clear_used_math();
-}
-
void release_thread(struct task_struct *dead_task)
{
BUG_ON(dead_task->mm);
}
EXPORT_SYMBOL_GPL(start_thread);
-static void hard_disable_TSC(void)
-{
- write_cr4(read_cr4() | X86_CR4_TSD);
-}
-
-void disable_TSC(void)
-{
- preempt_disable();
- if (!test_and_set_thread_flag(TIF_NOTSC))
- /*
- * Must flip the CPU state synchronously with
- * TIF_NOTSC in the current running context.
- */
- hard_disable_TSC();
- preempt_enable();
-}
-
-static void hard_enable_TSC(void)
-{
- write_cr4(read_cr4() & ~X86_CR4_TSD);
-}
-
-static void enable_TSC(void)
-{
- preempt_disable();
- if (test_and_clear_thread_flag(TIF_NOTSC))
- /*
- * Must flip the CPU state synchronously with
- * TIF_NOTSC in the current running context.
- */
- hard_enable_TSC();
- preempt_enable();
-}
-
-int get_tsc_mode(unsigned long adr)
-{
- unsigned int val;
-
- if (test_thread_flag(TIF_NOTSC))
- val = PR_TSC_SIGSEGV;
- else
- val = PR_TSC_ENABLE;
-
- return put_user(val, (unsigned int __user *)adr);
-}
-
-int set_tsc_mode(unsigned int val)
-{
- if (val == PR_TSC_SIGSEGV)
- disable_TSC();
- else if (val == PR_TSC_ENABLE)
- enable_TSC();
- else
- return -EINVAL;
-
- return 0;
-}
-
-static noinline void
-__switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
- struct tss_struct *tss)
-{
- struct thread_struct *prev, *next;
-
- prev = &prev_p->thread;
- next = &next_p->thread;
-
- if (test_tsk_thread_flag(next_p, TIF_DS_AREA_MSR) ||
- test_tsk_thread_flag(prev_p, TIF_DS_AREA_MSR))
- ds_switch_to(prev_p, next_p);
- else if (next->debugctlmsr != prev->debugctlmsr)
- update_debugctlmsr(next->debugctlmsr);
-
- if (test_tsk_thread_flag(next_p, TIF_DEBUG)) {
- set_debugreg(next->debugreg0, 0);
- set_debugreg(next->debugreg1, 1);
- set_debugreg(next->debugreg2, 2);
- set_debugreg(next->debugreg3, 3);
- /* no 4 and 5 */
- set_debugreg(next->debugreg6, 6);
- set_debugreg(next->debugreg7, 7);
- }
-
- if (test_tsk_thread_flag(prev_p, TIF_NOTSC) ^
- test_tsk_thread_flag(next_p, TIF_NOTSC)) {
- /* prev and next are different */
- if (test_tsk_thread_flag(next_p, TIF_NOTSC))
- hard_disable_TSC();
- else
- hard_enable_TSC();
- }
-
- if (!test_tsk_thread_flag(next_p, TIF_IO_BITMAP)) {
- /*
- * Disable the bitmap via an invalid offset. We still cache
- * the previous bitmap owner and the IO bitmap contents:
- */
- tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET;
- return;
- }
-
- if (likely(next == tss->io_bitmap_owner)) {
- /*
- * Previous owner of the bitmap (hence the bitmap content)
- * matches the next task, we dont have to do anything but
- * to set a valid offset in the TSS:
- */
- tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET;
- return;
- }
- /*
- * Lazy TSS's I/O bitmap copy. We set an invalid offset here
- * and we let the task to get a GPF in case an I/O instruction
- * is performed. The handler of the GPF will verify that the
- * faulting task has a valid I/O bitmap and, it true, does the
- * real copy and restart the instruction. This will save us
- * redundant copies when the currently switched task does not
- * perform any I/O during its timeslice.
- */
- tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET_LAZY;
-}
/*
* switch_to(x,yn) should switch tasks from x to y.
return prev_p;
}
-int sys_fork(struct pt_regs *regs)
-{
- return do_fork(SIGCHLD, regs->sp, regs, 0, NULL, NULL);
-}
-
int sys_clone(struct pt_regs *regs)
{
unsigned long clone_flags;
return do_fork(clone_flags, newsp, regs, 0, parent_tidptr, child_tidptr);
}
-/*
- * This is trivial, and on the face of it looks like it
- * could equally well be done in user mode.
- *
- * Not so, for quite unobvious reasons - register pressure.
- * In user mode vfork() cannot have a stack frame, and if
- * done by calling the "clone()" system call directly, you
- * do not have enough call-clobbered registers to hold all
- * the information you need.
- */
-int sys_vfork(struct pt_regs *regs)
-{
- return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, regs->sp, regs, 0, NULL, NULL);
-}
-
/*
* sys_execve() executes a new program.
*/
show_trace(NULL, regs, (void *)(regs + 1), regs->bp);
}
-/*
- * Free current thread data structures etc..
- */
-void exit_thread(void)
-{
- struct task_struct *me = current;
- struct thread_struct *t = &me->thread;
-
- if (me->thread.io_bitmap_ptr) {
- struct tss_struct *tss = &per_cpu(init_tss, get_cpu());
-
- kfree(t->io_bitmap_ptr);
- t->io_bitmap_ptr = NULL;
- clear_thread_flag(TIF_IO_BITMAP);
- /*
- * Careful, clear this in the TSS too:
- */
- memset(tss->io_bitmap, 0xff, t->io_bitmap_max);
- t->io_bitmap_max = 0;
- put_cpu();
- }
-
- ds_exit_thread(current);
-}
-
-void flush_thread(void)
-{
- struct task_struct *tsk = current;
-
- if (test_tsk_thread_flag(tsk, TIF_ABI_PENDING)) {
- clear_tsk_thread_flag(tsk, TIF_ABI_PENDING);
- if (test_tsk_thread_flag(tsk, TIF_IA32)) {
- clear_tsk_thread_flag(tsk, TIF_IA32);
- } else {
- set_tsk_thread_flag(tsk, TIF_IA32);
- current_thread_info()->status |= TS_COMPAT;
- }
- }
- clear_tsk_thread_flag(tsk, TIF_DEBUG);
-
- tsk->thread.debugreg0 = 0;
- tsk->thread.debugreg1 = 0;
- tsk->thread.debugreg2 = 0;
- tsk->thread.debugreg3 = 0;
- tsk->thread.debugreg6 = 0;
- tsk->thread.debugreg7 = 0;
- memset(tsk->thread.tls_array, 0, sizeof(tsk->thread.tls_array));
- /*
- * Forget coprocessor state..
- */
- tsk->fpu_counter = 0;
- clear_fpu(tsk);
- clear_used_math();
-}
-
void release_thread(struct task_struct *dead_task)
{
if (dead_task->mm) {
}
EXPORT_SYMBOL_GPL(start_thread);
-static void hard_disable_TSC(void)
-{
- write_cr4(read_cr4() | X86_CR4_TSD);
-}
-
-void disable_TSC(void)
-{
- preempt_disable();
- if (!test_and_set_thread_flag(TIF_NOTSC))
- /*
- * Must flip the CPU state synchronously with
- * TIF_NOTSC in the current running context.
- */
- hard_disable_TSC();
- preempt_enable();
-}
-
-static void hard_enable_TSC(void)
-{
- write_cr4(read_cr4() & ~X86_CR4_TSD);
-}
-
-static void enable_TSC(void)
-{
- preempt_disable();
- if (test_and_clear_thread_flag(TIF_NOTSC))
- /*
- * Must flip the CPU state synchronously with
- * TIF_NOTSC in the current running context.
- */
- hard_enable_TSC();
- preempt_enable();
-}
-
-int get_tsc_mode(unsigned long adr)
-{
- unsigned int val;
-
- if (test_thread_flag(TIF_NOTSC))
- val = PR_TSC_SIGSEGV;
- else
- val = PR_TSC_ENABLE;
-
- return put_user(val, (unsigned int __user *)adr);
-}
-
-int set_tsc_mode(unsigned int val)
-{
- if (val == PR_TSC_SIGSEGV)
- disable_TSC();
- else if (val == PR_TSC_ENABLE)
- enable_TSC();
- else
- return -EINVAL;
-
- return 0;
-}
-
-/*
- * This special macro can be used to load a debugging register
- */
-#define loaddebug(thread, r) set_debugreg(thread->debugreg ## r, r)
-
-static inline void __switch_to_xtra(struct task_struct *prev_p,
- struct task_struct *next_p,
- struct tss_struct *tss)
-{
- struct thread_struct *prev, *next;
-
- prev = &prev_p->thread,
- next = &next_p->thread;
-
- if (test_tsk_thread_flag(next_p, TIF_DS_AREA_MSR) ||
- test_tsk_thread_flag(prev_p, TIF_DS_AREA_MSR))
- ds_switch_to(prev_p, next_p);
- else if (next->debugctlmsr != prev->debugctlmsr)
- update_debugctlmsr(next->debugctlmsr);
-
- if (test_tsk_thread_flag(next_p, TIF_DEBUG)) {
- loaddebug(next, 0);
- loaddebug(next, 1);
- loaddebug(next, 2);
- loaddebug(next, 3);
- /* no 4 and 5 */
- loaddebug(next, 6);
- loaddebug(next, 7);
- }
-
- if (test_tsk_thread_flag(prev_p, TIF_NOTSC) ^
- test_tsk_thread_flag(next_p, TIF_NOTSC)) {
- /* prev and next are different */
- if (test_tsk_thread_flag(next_p, TIF_NOTSC))
- hard_disable_TSC();
- else
- hard_enable_TSC();
- }
-
- if (test_tsk_thread_flag(next_p, TIF_IO_BITMAP)) {
- /*
- * Copy the relevant range of the IO bitmap.
- * Normally this is 128 bytes or less:
- */
- memcpy(tss->io_bitmap, next->io_bitmap_ptr,
- max(prev->io_bitmap_max, next->io_bitmap_max));
- } else if (test_tsk_thread_flag(prev_p, TIF_IO_BITMAP)) {
- /*
- * Clear any possible leftover bits:
- */
- memset(tss->io_bitmap, 0xff, prev->io_bitmap_max);
- }
-}
-
/*
* switch_to(x,y) should switch tasks from x to y.
*
current->personality &= ~READ_IMPLIES_EXEC;
}
-asmlinkage long sys_fork(struct pt_regs *regs)
-{
- return do_fork(SIGCHLD, regs->sp, regs, 0, NULL, NULL);
-}
-
asmlinkage long
sys_clone(unsigned long clone_flags, unsigned long newsp,
void __user *parent_tid, void __user *child_tid, struct pt_regs *regs)
return do_fork(clone_flags, newsp, regs, 0, parent_tid, child_tid);
}
-/*
- * This is trivial, and on the face of it looks like it
- * could equally well be done in user mode.
- *
- * Not so, for quite unobvious reasons - register pressure.
- * In user mode vfork() cannot have a stack frame, and if
- * done by calling the "clone()" system call directly, you
- * do not have enough call-clobbered registers to hold all
- * the information you need.
- */
-asmlinkage long sys_vfork(struct pt_regs *regs)
-{
- return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, regs->sp, regs, 0,
- NULL, NULL);
-}
-
unsigned long get_wchan(struct task_struct *p)
{
unsigned long stack;
#ifdef CONFIG_X86_32
# define IS_IA32 1
#elif defined CONFIG_IA32_EMULATION
-# define IS_IA32 test_thread_flag(TIF_IA32)
+# define IS_IA32 is_compat_task()
#else
# define IS_IA32 0
#endif
DMI_MATCH(DMI_PRODUCT_NAME, "HP Compaq"),
},
},
+ { /* Handle problems with rebooting on Dell XPS710 */
+ .callback = set_bios_reboot,
+ .ident = "Dell XPS710",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Dell XPS710"),
+ },
+ },
{ }
};
#define PTR(x) (x << 2)
-/* control_page + KEXEC_CONTROL_CODE_MAX_SIZE
+/*
+ * control_page + KEXEC_CONTROL_CODE_MAX_SIZE
* ~ control_page + PAGE_SIZE are used as data storage and stack for
* jumping back
*/
movl %eax, CP_PA_SWAP_PAGE(%edi)
movl %ebx, CP_PA_BACKUP_PAGES_MAP(%edi)
- /* get physical address of control page now */
- /* this is impossible after page table switch */
+ /*
+ * get physical address of control page now
+ * this is impossible after page table switch
+ */
movl PTR(PA_CONTROL_PAGE)(%ebp), %edi
/* switch to new set of page tables */
/* store the start address on the stack */
pushl %edx
- /* Set cr0 to a known state:
+ /*
+ * Set cr0 to a known state:
* - Paging disabled
* - Alignment check disabled
* - Write protect disabled
/* clear cr4 if applicable */
testl %ecx, %ecx
jz 1f
- /* Set cr4 to a known state:
+ /*
+ * Set cr4 to a known state:
* Setting everything to zero seems safe.
*/
xorl %eax, %eax
call swap_pages
addl $8, %esp
- /* To be certain of avoiding problems with self-modifying code
+ /*
+ * To be certain of avoiding problems with self-modifying code
* I need to execute a serializing instruction here.
* So I flush the TLB, it's handy, and not processor dependent.
*/
xorl %eax, %eax
movl %eax, %cr3
- /* set all of the registers to known values */
- /* leave %esp alone */
+ /*
+ * set all of the registers to known values
+ * leave %esp alone
+ */
testl %esi, %esi
jnz 1f
#define PTR(x) (x << 3)
#define PAGE_ATTR (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | _PAGE_DIRTY)
+/*
+ * control_page + KEXEC_CONTROL_CODE_MAX_SIZE
+ * ~ control_page + PAGE_SIZE are used as data storage and stack for
+ * jumping back
+ */
+#define DATA(offset) (KEXEC_CONTROL_CODE_MAX_SIZE+(offset))
+
+/* Minimal CPU state */
+#define RSP DATA(0x0)
+#define CR0 DATA(0x8)
+#define CR3 DATA(0x10)
+#define CR4 DATA(0x18)
+
+/* other data */
+#define CP_PA_TABLE_PAGE DATA(0x20)
+#define CP_PA_SWAP_PAGE DATA(0x28)
+#define CP_PA_BACKUP_PAGES_MAP DATA(0x30)
+
.text
.align PAGE_SIZE
.code64
.globl relocate_kernel
relocate_kernel:
- /* %rdi indirection_page
+ /*
+ * %rdi indirection_page
* %rsi page_list
* %rdx start address
+ * %rcx preserve_context
*/
+ /* Save the CPU context, used for jumping back */
+ pushq %rbx
+ pushq %rbp
+ pushq %r12
+ pushq %r13
+ pushq %r14
+ pushq %r15
+ pushf
+
+ movq PTR(VA_CONTROL_PAGE)(%rsi), %r11
+ movq %rsp, RSP(%r11)
+ movq %cr0, %rax
+ movq %rax, CR0(%r11)
+ movq %cr3, %rax
+ movq %rax, CR3(%r11)
+ movq %cr4, %rax
+ movq %rax, CR4(%r11)
+
/* zero out flags, and disable interrupts */
pushq $0
popfq
- /* get physical address of control page now */
- /* this is impossible after page table switch */
+ /*
+ * get physical address of control page now
+ * this is impossible after page table switch
+ */
movq PTR(PA_CONTROL_PAGE)(%rsi), %r8
/* get physical address of page table now too */
- movq PTR(PA_TABLE_PAGE)(%rsi), %rcx
+ movq PTR(PA_TABLE_PAGE)(%rsi), %r9
+
+ /* get physical address of swap page now */
+ movq PTR(PA_SWAP_PAGE)(%rsi), %r10
+
+ /* save some information for jumping back */
+ movq %r9, CP_PA_TABLE_PAGE(%r11)
+ movq %r10, CP_PA_SWAP_PAGE(%r11)
+ movq %rdi, CP_PA_BACKUP_PAGES_MAP(%r11)
/* Switch to the identity mapped page tables */
- movq %rcx, %cr3
+ movq %r9, %cr3
/* setup a new stack at the end of the physical control page */
lea PAGE_SIZE(%r8), %rsp
/* store the start address on the stack */
pushq %rdx
- /* Set cr0 to a known state:
+ /*
+ * Set cr0 to a known state:
* - Paging enabled
* - Alignment check disabled
* - Write protect disabled
orl $(X86_CR0_PG | X86_CR0_PE), %eax
movq %rax, %cr0
- /* Set cr4 to a known state:
+ /*
+ * Set cr4 to a known state:
* - physical address extension enabled
*/
movq $X86_CR4_PAE, %rax
1:
/* Flush the TLB (needed?) */
- movq %rcx, %cr3
+ movq %r9, %cr3
+
+ movq %rcx, %r11
+ call swap_pages
+
+ /*
+ * To be certain of avoiding problems with self-modifying code
+ * I need to execute a serializing instruction here.
+ * So I flush the TLB by reloading %cr3 here, it's handy,
+ * and not processor dependent.
+ */
+ movq %cr3, %rax
+ movq %rax, %cr3
+
+ /*
+ * set all of the registers to known values
+ * leave %rsp alone
+ */
+
+ testq %r11, %r11
+ jnz 1f
+ xorq %rax, %rax
+ xorq %rbx, %rbx
+ xorq %rcx, %rcx
+ xorq %rdx, %rdx
+ xorq %rsi, %rsi
+ xorq %rdi, %rdi
+ xorq %rbp, %rbp
+ xorq %r8, %r8
+ xorq %r9, %r9
+ xorq %r10, %r10
+ xorq %r11, %r11
+ xorq %r12, %r12
+ xorq %r13, %r13
+ xorq %r14, %r14
+ xorq %r15, %r15
+
+ ret
+
+1:
+ popq %rdx
+ leaq PAGE_SIZE(%r10), %rsp
+ call *%rdx
+
+ /* get the re-entry point of the peer system */
+ movq 0(%rsp), %rbp
+ call 1f
+1:
+ popq %r8
+ subq $(1b - relocate_kernel), %r8
+ movq CP_PA_SWAP_PAGE(%r8), %r10
+ movq CP_PA_BACKUP_PAGES_MAP(%r8), %rdi
+ movq CP_PA_TABLE_PAGE(%r8), %rax
+ movq %rax, %cr3
+ lea PAGE_SIZE(%r8), %rsp
+ call swap_pages
+ movq $virtual_mapped, %rax
+ pushq %rax
+ ret
+
+virtual_mapped:
+ movq RSP(%r8), %rsp
+ movq CR4(%r8), %rax
+ movq %rax, %cr4
+ movq CR3(%r8), %rax
+ movq CR0(%r8), %r8
+ movq %rax, %cr3
+ movq %r8, %cr0
+ movq %rbp, %rax
+
+ popf
+ popq %r15
+ popq %r14
+ popq %r13
+ popq %r12
+ popq %rbp
+ popq %rbx
+ ret
/* Do the copies */
+swap_pages:
movq %rdi, %rcx /* Put the page_list in %rcx */
xorq %rdi, %rdi
xorq %rsi, %rsi
movq %rcx, %rsi /* For ever source page do a copy */
andq $0xfffffffffffff000, %rsi
+ movq %rdi, %rdx
+ movq %rsi, %rax
+
+ movq %r10, %rdi
movq $512, %rcx
rep ; movsq
- jmp 0b
-3:
-
- /* To be certain of avoiding problems with self-modifying code
- * I need to execute a serializing instruction here.
- * So I flush the TLB by reloading %cr3 here, it's handy,
- * and not processor dependent.
- */
- movq %cr3, %rax
- movq %rax, %cr3
- /* set all of the registers to known values */
- /* leave %rsp alone */
+ movq %rax, %rdi
+ movq %rdx, %rsi
+ movq $512, %rcx
+ rep ; movsq
- xorq %rax, %rax
- xorq %rbx, %rbx
- xorq %rcx, %rcx
- xorq %rdx, %rdx
- xorq %rsi, %rsi
- xorq %rdi, %rdi
- xorq %rbp, %rbp
- xorq %r8, %r8
- xorq %r9, %r9
- xorq %r10, %r9
- xorq %r11, %r11
- xorq %r12, %r12
- xorq %r13, %r13
- xorq %r14, %r14
- xorq %r15, %r15
+ movq %rdx, %rdi
+ movq %r10, %rsi
+ movq $512, %rcx
+ rep ; movsq
+ lea PAGE_SIZE(%rax), %rsi
+ jmp 0b
+3:
ret
+
+ .globl kexec_control_code_size
+.set kexec_control_code_size, . - relocate_kernel
#endif
#else
-struct cpuinfo_x86 boot_cpu_data __read_mostly;
+struct cpuinfo_x86 boot_cpu_data __read_mostly = {
+ .x86_phys_bits = MAX_PHYSMEM_BITS,
+};
EXPORT_SYMBOL(boot_cpu_data);
#endif
early_param("elfcorehdr", setup_elfcorehdr);
#endif
-static int __init default_update_apic(void)
-{
-#ifdef CONFIG_SMP
- if (!apic->wakeup_cpu)
- apic->wakeup_cpu = wakeup_secondary_cpu_via_init;
-#endif
-
- return 0;
-}
-
-static struct x86_quirks default_x86_quirks __initdata = {
- .update_apic = default_update_apic,
-};
+static struct x86_quirks default_x86_quirks __initdata;
struct x86_quirks *x86_quirks __initdata = &default_x86_quirks;
finish_e820_parsing();
+ if (efi_enabled)
+ efi_init();
+
dmi_scan_machine();
dmi_check_system(bad_bios_dmi_table);
insert_resource(&iomem_resource, &data_resource);
insert_resource(&iomem_resource, &bss_resource);
- if (efi_enabled)
- efi_init();
#ifdef CONFIG_X86_32
if (ppro_with_ram_bug()) {
reserve_initrd();
-#ifdef CONFIG_X86_64
vsmp_init();
-#endif
io_delay_init();
#include <linux/crash_dump.h>
#include <linux/smp.h>
#include <linux/topology.h>
+#include <linux/pfn.h>
#include <asm/sections.h>
#include <asm/processor.h>
#include <asm/setup.h>
};
EXPORT_SYMBOL(__per_cpu_offset);
+/*
+ * On x86_64, symbols referenced from code should be reachable using
+ * 32-bit relocations. Reserve space for static percpu variables in
+ * modules so that they are always served from the first chunk, which
+ * is located at the percpu segment base. On x86_32, anything can
+ * address anywhere. No need to reserve space in the first chunk.
+ */
+#ifdef CONFIG_X86_64
+#define PERCPU_FIRST_CHUNK_RESERVE PERCPU_MODULE_RESERVE
+#else
+#define PERCPU_FIRST_CHUNK_RESERVE 0
+#endif
+
+/**
+ * pcpu_need_numa - determine whether percpu allocation should consider NUMA
+ *
+ * If NUMA is not configured or there is only one NUMA node available,
+ * there is no reason to consider NUMA. This function determines
+ * whether percpu allocation should consider NUMA or not.
+ *
+ * RETURNS:
+ * true if NUMA should be considered; otherwise, false.
+ */
+static bool __init pcpu_need_numa(void)
+{
+#ifdef CONFIG_NEED_MULTIPLE_NODES
+ pg_data_t *last = NULL;
+ unsigned int cpu;
+
+ for_each_possible_cpu(cpu) {
+ int node = early_cpu_to_node(cpu);
+
+ if (node_online(node) && NODE_DATA(node) &&
+ last && last != NODE_DATA(node))
+ return true;
+
+ last = NODE_DATA(node);
+ }
+#endif
+ return false;
+}
+
+/**
+ * pcpu_alloc_bootmem - NUMA friendly alloc_bootmem wrapper for percpu
+ * @cpu: cpu to allocate for
+ * @size: size allocation in bytes
+ * @align: alignment
+ *
+ * Allocate @size bytes aligned at @align for cpu @cpu. This wrapper
+ * does the right thing for NUMA regardless of the current
+ * configuration.
+ *
+ * RETURNS:
+ * Pointer to the allocated area on success, NULL on failure.
+ */
+static void * __init pcpu_alloc_bootmem(unsigned int cpu, unsigned long size,
+ unsigned long align)
+{
+ const unsigned long goal = __pa(MAX_DMA_ADDRESS);
+#ifdef CONFIG_NEED_MULTIPLE_NODES
+ int node = early_cpu_to_node(cpu);
+ void *ptr;
+
+ if (!node_online(node) || !NODE_DATA(node)) {
+ ptr = __alloc_bootmem_nopanic(size, align, goal);
+ pr_info("cpu %d has no node %d or node-local memory\n",
+ cpu, node);
+ pr_debug("per cpu data for cpu%d %lu bytes at %016lx\n",
+ cpu, size, __pa(ptr));
+ } else {
+ ptr = __alloc_bootmem_node_nopanic(NODE_DATA(node),
+ size, align, goal);
+ pr_debug("per cpu data for cpu%d %lu bytes on node%d at "
+ "%016lx\n", cpu, size, node, __pa(ptr));
+ }
+ return ptr;
+#else
+ return __alloc_bootmem_nopanic(size, align, goal);
+#endif
+}
+
+/*
+ * Remap allocator
+ *
+ * This allocator uses a PMD page as its unit. A PMD page is
+ * allocated for each cpu and each is remapped into the vmalloc area
+ * using a PMD mapping. As a PMD page is quite large, only part of
+ * it is used for the first chunk; the unused part is returned to
+ * the bootmem allocator.
+ *
+ * So, the PMD pages are mapped twice - once in the physical mapping
+ * and once in the vmalloc area for the first percpu chunk. The
+ * double mapping adds the pressure of one more PMD TLB entry, but
+ * it is still much better than using only 4k mappings while
+ * remaining NUMA friendly.
+ */
+#ifdef CONFIG_NEED_MULTIPLE_NODES
+static size_t pcpur_size __initdata;
+static void **pcpur_ptrs __initdata;
+
+static struct page * __init pcpur_get_page(unsigned int cpu, int pageno)
+{
+ size_t off = (size_t)pageno << PAGE_SHIFT;
+
+ if (off >= pcpur_size)
+ return NULL;
+
+ return virt_to_page(pcpur_ptrs[cpu] + off);
+}
+
+static ssize_t __init setup_pcpu_remap(size_t static_size)
+{
+ static struct vm_struct vm;
+ pg_data_t *last;
+ size_t ptrs_size, dyn_size;
+ unsigned int cpu;
+ ssize_t ret;
+
+ /*
+ * If large page isn't supported, there's no benefit in doing
+ * this. Also, on non-NUMA, embedding is better.
+ */
+ if (!cpu_has_pse || pcpu_need_numa())
+ return -EINVAL;
+
+ last = NULL;
+ for_each_possible_cpu(cpu) {
+ int node = early_cpu_to_node(cpu);
+
+ if (node_online(node) && NODE_DATA(node) &&
+ last && last != NODE_DATA(node))
+ goto proceed;
+
+ last = NODE_DATA(node);
+ }
+ return -EINVAL;
+
+proceed:
+ /*
+ * Currently supports only a single page. Supporting multiple
+ * pages won't be too difficult if it ever becomes necessary.
+ */
+ pcpur_size = PFN_ALIGN(static_size + PERCPU_MODULE_RESERVE +
+ PERCPU_DYNAMIC_RESERVE);
+ if (pcpur_size > PMD_SIZE) {
+ pr_warning("PERCPU: static data is larger than large page, "
+ "can't use large page\n");
+ return -EINVAL;
+ }
+ dyn_size = pcpur_size - static_size - PERCPU_FIRST_CHUNK_RESERVE;
+
+ /* allocate pointer array and alloc large pages */
+ ptrs_size = PFN_ALIGN(num_possible_cpus() * sizeof(pcpur_ptrs[0]));
+ pcpur_ptrs = alloc_bootmem(ptrs_size);
+
+ for_each_possible_cpu(cpu) {
+ pcpur_ptrs[cpu] = pcpu_alloc_bootmem(cpu, PMD_SIZE, PMD_SIZE);
+ if (!pcpur_ptrs[cpu])
+ goto enomem;
+
+ /*
+ * Only use pcpur_size bytes and give back the rest.
+ *
+ * Ingo: The 2MB up-rounding bootmem is needed to make
+ * sure the partial 2MB page is still fully RAM - it's
+ * not well-specified to have a PAT-incompatible area
+ * (unmapped RAM, device memory, etc.) in that hole.
+ */
+ free_bootmem(__pa(pcpur_ptrs[cpu] + pcpur_size),
+ PMD_SIZE - pcpur_size);
+
+ memcpy(pcpur_ptrs[cpu], __per_cpu_load, static_size);
+ }
+
+ /* allocate address and map */
+ vm.flags = VM_ALLOC;
+ vm.size = num_possible_cpus() * PMD_SIZE;
+ vm_area_register_early(&vm, PMD_SIZE);
+
+ for_each_possible_cpu(cpu) {
+ pmd_t *pmd;
+
+ pmd = populate_extra_pmd((unsigned long)vm.addr
+ + cpu * PMD_SIZE);
+ set_pmd(pmd, pfn_pmd(page_to_pfn(virt_to_page(pcpur_ptrs[cpu])),
+ PAGE_KERNEL_LARGE));
+ }
+
+ /* we're ready, commit */
+ pr_info("PERCPU: Remapped at %p with large pages, static data "
+ "%zu bytes\n", vm.addr, static_size);
+
+ ret = pcpu_setup_first_chunk(pcpur_get_page, static_size,
+ PERCPU_FIRST_CHUNK_RESERVE,
+ PMD_SIZE, dyn_size, vm.addr, NULL);
+ goto out_free_ar;
+
+enomem:
+ for_each_possible_cpu(cpu)
+ if (pcpur_ptrs[cpu])
+ free_bootmem(__pa(pcpur_ptrs[cpu]), PMD_SIZE);
+ ret = -ENOMEM;
+out_free_ar:
+ free_bootmem(__pa(pcpur_ptrs), ptrs_size);
+ return ret;
+}
+#else
+static ssize_t __init setup_pcpu_remap(size_t static_size)
+{
+ return -EINVAL;
+}
+#endif
+
+/*
+ * Embedding allocator
+ *
+ * The first chunk is sized to just contain the static area plus
+ * module and dynamic reserves, allocated as a contiguous area
+ * using the bootmem allocator, and used as-is without being mapped
+ * into the vmalloc area. This lets the first chunk piggyback on
+ * the linear physical PMD mapping without adding any extra pressure
+ * on the TLB. Note that if the needed size is smaller than the
+ * minimum unit size, the leftover is returned to the bootmem
+ * allocator.
+ */
+static void *pcpue_ptr __initdata;
+static size_t pcpue_size __initdata;
+static size_t pcpue_unit_size __initdata;
+
+static struct page * __init pcpue_get_page(unsigned int cpu, int pageno)
+{
+ size_t off = (size_t)pageno << PAGE_SHIFT;
+
+ if (off >= pcpue_size)
+ return NULL;
+
+ return virt_to_page(pcpue_ptr + cpu * pcpue_unit_size + off);
+}
+
+static ssize_t __init setup_pcpu_embed(size_t static_size)
+{
+ unsigned int cpu;
+ size_t dyn_size;
+
+ /*
+ * If large page isn't supported, there's no benefit in doing
+ * this. Also, embedding allocation doesn't play well with
+ * NUMA.
+ */
+ if (!cpu_has_pse || pcpu_need_numa())
+ return -EINVAL;
+
+ /* allocate and copy */
+ pcpue_size = PFN_ALIGN(static_size + PERCPU_MODULE_RESERVE +
+ PERCPU_DYNAMIC_RESERVE);
+ pcpue_unit_size = max_t(size_t, pcpue_size, PCPU_MIN_UNIT_SIZE);
+ dyn_size = pcpue_size - static_size - PERCPU_FIRST_CHUNK_RESERVE;
+
+ pcpue_ptr = pcpu_alloc_bootmem(0, num_possible_cpus() * pcpue_unit_size,
+ PAGE_SIZE);
+ if (!pcpue_ptr)
+ return -ENOMEM;
+
+ for_each_possible_cpu(cpu) {
+ void *ptr = pcpue_ptr + cpu * pcpue_unit_size;
+
+ free_bootmem(__pa(ptr + pcpue_size),
+ pcpue_unit_size - pcpue_size);
+ memcpy(ptr, __per_cpu_load, static_size);
+ }
+
+ /* we're ready, commit */
+ pr_info("PERCPU: Embedded %zu pages at %p, static data %zu bytes\n",
+ pcpue_size >> PAGE_SHIFT, pcpue_ptr, static_size);
+
+ return pcpu_setup_first_chunk(pcpue_get_page, static_size,
+ PERCPU_FIRST_CHUNK_RESERVE,
+ pcpue_unit_size, dyn_size,
+ pcpue_ptr, NULL);
+}
+
+/*
+ * 4k page allocator
+ *
+ * This is the basic allocator. Static percpu area is allocated
+ * page-by-page and most of initialization is done by the generic
+ * setup function.
+ */
+static struct page **pcpu4k_pages __initdata;
+static int pcpu4k_nr_static_pages __initdata;
+
+static struct page * __init pcpu4k_get_page(unsigned int cpu, int pageno)
+{
+ if (pageno < pcpu4k_nr_static_pages)
+ return pcpu4k_pages[cpu * pcpu4k_nr_static_pages + pageno];
+ return NULL;
+}
+
+static void __init pcpu4k_populate_pte(unsigned long addr)
+{
+ populate_extra_pte(addr);
+}
+
+static ssize_t __init setup_pcpu_4k(size_t static_size)
+{
+ size_t pages_size;
+ unsigned int cpu;
+ int i, j;
+ ssize_t ret;
+
+ pcpu4k_nr_static_pages = PFN_UP(static_size);
+
+ /* unaligned allocations can't be freed, round up to page size */
+ pages_size = PFN_ALIGN(pcpu4k_nr_static_pages * num_possible_cpus()
+ * sizeof(pcpu4k_pages[0]));
+ pcpu4k_pages = alloc_bootmem(pages_size);
+
+ /* allocate and copy */
+ j = 0;
+ for_each_possible_cpu(cpu)
+ for (i = 0; i < pcpu4k_nr_static_pages; i++) {
+ void *ptr;
+
+ ptr = pcpu_alloc_bootmem(cpu, PAGE_SIZE, PAGE_SIZE);
+ if (!ptr)
+ goto enomem;
+
+ memcpy(ptr, __per_cpu_load + i * PAGE_SIZE, PAGE_SIZE);
+ pcpu4k_pages[j++] = virt_to_page(ptr);
+ }
+
+ /* we're ready, commit */
+ pr_info("PERCPU: Allocated %d 4k pages, static data %zu bytes\n",
+ pcpu4k_nr_static_pages, static_size);
+
+ ret = pcpu_setup_first_chunk(pcpu4k_get_page, static_size,
+ PERCPU_FIRST_CHUNK_RESERVE, -1, -1, NULL,
+ pcpu4k_populate_pte);
+ goto out_free_ar;
+
+enomem:
+ while (--j >= 0)
+ free_bootmem(__pa(page_address(pcpu4k_pages[j])), PAGE_SIZE);
+ ret = -ENOMEM;
+out_free_ar:
+ free_bootmem(__pa(pcpu4k_pages), pages_size);
+ return ret;
+}
+
static inline void setup_percpu_segment(int cpu)
{
#ifdef CONFIG_X86_32
*/
void __init setup_per_cpu_areas(void)
{
- ssize_t size;
- char *ptr;
- int cpu;
-
- /* Copy section for each CPU (we discard the original) */
- size = roundup(PERCPU_ENOUGH_ROOM, PAGE_SIZE);
+ size_t static_size = __per_cpu_end - __per_cpu_start;
+ unsigned int cpu;
+ unsigned long delta;
+ size_t pcpu_unit_size;
+ ssize_t ret;
pr_info("NR_CPUS:%d nr_cpumask_bits:%d nr_cpu_ids:%d nr_node_ids:%d\n",
NR_CPUS, nr_cpumask_bits, nr_cpu_ids, nr_node_ids);
- pr_info("PERCPU: Allocating %zd bytes of per cpu data\n", size);
+ /*
+ * Allocate percpu area. If PSE is supported, try to make use
+ * of large page mappings. Please read comments on top of
+ * each allocator for details.
+ */
+ ret = setup_pcpu_remap(static_size);
+ if (ret < 0)
+ ret = setup_pcpu_embed(static_size);
+ if (ret < 0)
+ ret = setup_pcpu_4k(static_size);
+ if (ret < 0)
+ panic("cannot allocate static percpu area (%zu bytes, err=%zd)",
+ static_size, ret);
- for_each_possible_cpu(cpu) {
-#ifndef CONFIG_NEED_MULTIPLE_NODES
- ptr = alloc_bootmem_pages(size);
-#else
- int node = early_cpu_to_node(cpu);
- if (!node_online(node) || !NODE_DATA(node)) {
- ptr = alloc_bootmem_pages(size);
- pr_info("cpu %d has no node %d or node-local memory\n",
- cpu, node);
- pr_debug("per cpu data for cpu%d at %016lx\n",
- cpu, __pa(ptr));
- } else {
- ptr = alloc_bootmem_pages_node(NODE_DATA(node), size);
- pr_debug("per cpu data for cpu%d on node%d at %016lx\n",
- cpu, node, __pa(ptr));
- }
-#endif
+ pcpu_unit_size = ret;
- memcpy(ptr, __per_cpu_load, __per_cpu_end - __per_cpu_start);
- per_cpu_offset(cpu) = ptr - __per_cpu_start;
+ /* alrighty, percpu areas up and running */
+ delta = (unsigned long)pcpu_base_addr - (unsigned long)__per_cpu_start;
+ for_each_possible_cpu(cpu) {
+ per_cpu_offset(cpu) = delta + cpu * pcpu_unit_size;
per_cpu(this_cpu_off, cpu) = per_cpu_offset(cpu);
per_cpu(cpu_number, cpu) = cpu;
setup_percpu_segment(cpu);
*/
if (cpu == boot_cpu_id)
switch_to_new_gdt(cpu);
-
- DBG("PERCPU: cpu %4d %p\n", cpu, ptr);
}
/* indicate the early static arrays will soon be gone */
/*
* Set up a signal frame.
*/
-#ifdef CONFIG_X86_32
-static const struct {
- u16 poplmovl;
- u32 val;
- u16 int80;
-} __attribute__((packed)) retcode = {
- 0xb858, /* popl %eax; movl $..., %eax */
- __NR_sigreturn,
- 0x80cd, /* int $0x80 */
-};
-
-static const struct {
- u8 movl;
- u32 val;
- u16 int80;
- u8 pad;
-} __attribute__((packed)) rt_retcode = {
- 0xb8, /* movl $..., %eax */
- __NR_rt_sigreturn,
- 0x80cd, /* int $0x80 */
- 0
-};
/*
 * Determine which stack to use.
*/
+static unsigned long align_sigframe(unsigned long sp)
+{
+#ifdef CONFIG_X86_32
+ /*
+ * Align the stack pointer according to the i386 ABI,
+ * i.e. so that on function entry ((sp + 4) & 15) == 0.
+ */
+ sp = ((sp + 4) & -16ul) - 4;
+#else /* !CONFIG_X86_32 */
+ sp = round_down(sp, 16) - 8;
+#endif
+ return sp;
+}
+
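The i386 arithmetic deserves a worked check: the ABI wants
((sp + 4) & 15) == 0 at function entry, i.e. the stack 16-byte aligned
just before the return-address slot. A small user-space sketch of the
masking (the sample stack value is arbitrary):

	#include <assert.h>

	/* Mirror of the i386 branch of align_sigframe(). */
	static unsigned long align_sigframe_32(unsigned long sp)
	{
		return ((sp + 4) & -16ul) - 4;
	}

	int main(void)
	{
		/* 0xbfff1234 -> 0xbfff122c; 0xbfff1230 is 16-byte aligned */
		unsigned long sp = align_sigframe_32(0xbfff1234ul);

		assert(((sp + 4) & 15) == 0);	/* ABI entry condition */
		assert(sp <= 0xbfff1234ul);	/* only ever moves down */
		return 0;
	}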
static inline void __user *
get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, size_t frame_size,
- void **fpstate)
+ void __user **fpstate)
{
- unsigned long sp;
-
/* Default to using normal stack */
- sp = regs->sp;
+ unsigned long sp = regs->sp;
+
+#ifdef CONFIG_X86_64
+ /* redzone */
+ sp -= 128;
+#endif /* CONFIG_X86_64 */
/*
* If we are on the alternate signal stack and would overflow it, don't.
if (sas_ss_flags(sp) == 0)
sp = current->sas_ss_sp + current->sas_ss_size;
} else {
+#ifdef CONFIG_X86_32
/* This is the legacy signal stack switching. */
if ((regs->ss & 0xffff) != __USER_DS &&
!(ka->sa.sa_flags & SA_RESTORER) &&
ka->sa.sa_restorer)
sp = (unsigned long) ka->sa.sa_restorer;
+#endif /* CONFIG_X86_32 */
}
if (used_math()) {
- sp = sp - sig_xstate_size;
- *fpstate = (struct _fpstate *) sp;
+ sp -= sig_xstate_size;
+#ifdef CONFIG_X86_64
+ sp = round_down(sp, 64);
+#endif /* CONFIG_X86_64 */
+ *fpstate = (void __user *)sp;
+
if (save_i387_xstate(*fpstate) < 0)
return (void __user *)-1L;
}
- sp -= frame_size;
- /*
- * Align the stack pointer according to the i386 ABI,
- * i.e. so that on function entry ((sp + 4) & 15) == 0.
- */
- sp = ((sp + 4) & -16ul) - 4;
-
- return (void __user *) sp;
+ return (void __user *)align_sigframe(sp - frame_size);
}
+#ifdef CONFIG_X86_32
+static const struct {
+ u16 poplmovl;
+ u32 val;
+ u16 int80;
+} __attribute__((packed)) retcode = {
+ 0xb858, /* popl %eax; movl $..., %eax */
+ __NR_sigreturn,
+ 0x80cd, /* int $0x80 */
+};
+
+static const struct {
+ u8 movl;
+ u32 val;
+ u16 int80;
+ u8 pad;
+} __attribute__((packed)) rt_retcode = {
+ 0xb8, /* movl $..., %eax */
+ __NR_rt_sigreturn,
+ 0x80cd, /* int $0x80 */
+ 0
+};
+
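On little-endian x86 the packed structs lay the trampoline down as raw
instruction bytes; the u16 0xb858 stores 0x58 before 0xb8. Decoded for
illustration:

	0x58			popl	%eax
	0xb8 <__NR_sigreturn>	movl	$__NR_sigreturn, %eax
	0xcd 0x80		int	$0x80

rt_retcode is the same without the leading popl - movl
$__NR_rt_sigreturn, %eax; int $0x80 - followed by one byte of padding.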
static int
__setup_frame(int sig, struct k_sigaction *ka, sigset_t *set,
struct pt_regs *regs)
return 0;
}
#else /* !CONFIG_X86_32 */
-/*
- * Determine which stack to use..
- */
-static void __user *
-get_stack(struct k_sigaction *ka, unsigned long sp, unsigned long size)
-{
- /* Default to using normal stack - redzone*/
- sp -= 128;
-
- /* This is the X/Open sanctioned signal stack switching. */
- if (ka->sa.sa_flags & SA_ONSTACK) {
- if (sas_ss_flags(sp) == 0)
- sp = current->sas_ss_sp + current->sas_ss_size;
- }
-
- return (void __user *)round_down(sp - size, 64);
-}
-
static int __setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
sigset_t *set, struct pt_regs *regs)
{
int err = 0;
struct task_struct *me = current;
- if (used_math()) {
- fp = get_stack(ka, regs->sp, sig_xstate_size);
- frame = (void __user *)round_down(
- (unsigned long)fp - sizeof(struct rt_sigframe), 16) - 8;
-
- if (save_i387_xstate(fp) < 0)
- return -EFAULT;
- } else
- frame = get_stack(ka, regs->sp, sizeof(struct rt_sigframe)) - 8;
+ frame = get_sigframe(ka, regs, sizeof(struct rt_sigframe), &fp);
if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
return -EFAULT;
DEFINE_PER_CPU_SHARED_ALIGNED(struct cpuinfo_x86, cpu_info);
EXPORT_PER_CPU_SYMBOL(cpu_info);
-static atomic_t init_deasserted;
-
-
-/* Set if we find a B stepping CPU */
-static int __cpuinitdata smp_b_stepping;
+atomic_t init_deasserted;
#if defined(CONFIG_NUMA) && defined(CONFIG_X86_32)
cpumask_set_cpu(cpuid, cpu_callin_mask);
}
-static int __cpuinitdata unsafe_smp;
-
/*
* Activate a secondary processor.
*/
cpu_idle();
}
-static void __cpuinit smp_apply_quirks(struct cpuinfo_x86 *c)
-{
- /*
- * Mask B, Pentium, but not Pentium MMX
- */
- if (c->x86_vendor == X86_VENDOR_INTEL &&
- c->x86 == 5 &&
- c->x86_mask >= 1 && c->x86_mask <= 4 &&
- c->x86_model <= 3)
- /*
- * Remember we have B step Pentia with bugs
- */
- smp_b_stepping = 1;
-
- /*
- * Certain Athlons might work (for various values of 'work') in SMP
- * but they are not certified as MP capable.
- */
- if ((c->x86_vendor == X86_VENDOR_AMD) && (c->x86 == 6)) {
-
- if (num_possible_cpus() == 1)
- goto valid_k7;
-
- /* Athlon 660/661 is valid. */
- if ((c->x86_model == 6) && ((c->x86_mask == 0) ||
- (c->x86_mask == 1)))
- goto valid_k7;
-
- /* Duron 670 is valid */
- if ((c->x86_model == 7) && (c->x86_mask == 0))
- goto valid_k7;
-
- /*
- * Athlon 662, Duron 671, and Athlon >model 7 have capability
- * bit. It's worth noting that the A5 stepping (662) of some
- * Athlon XP's have the MP bit set.
- * See http://www.heise.de/newsticker/data/jow-18.10.01-000 for
- * more.
- */
- if (((c->x86_model == 6) && (c->x86_mask >= 2)) ||
- ((c->x86_model == 7) && (c->x86_mask >= 1)) ||
- (c->x86_model > 7))
- if (cpu_has_mp)
- goto valid_k7;
-
- /* If we get here, not a certified SMP capable AMD system. */
- unsafe_smp = 1;
- }
-
-valid_k7:
- ;
-}
-
-static void __cpuinit smp_checks(void)
-{
- if (smp_b_stepping)
- printk(KERN_WARNING "WARNING: SMP operation may be unreliable"
- "with B stepping processors.\n");
-
- /*
- * Don't taint if we are running SMP kernel on a single non-MP
- * approved Athlon
- */
- if (unsafe_smp && num_online_cpus() > 1) {
- printk(KERN_INFO "WARNING: This combination of AMD"
- "processors is not suitable for SMP.\n");
- add_taint(TAINT_UNSAFE_SMP);
- }
-}
-
/*
* The bootstrap kernel entry code has set these up. Save them for
* a given CPU
c->cpu_index = id;
if (id != 0)
identify_secondary_cpu(c);
- smp_apply_quirks(c);
}
unsigned long send_status, accept_status = 0;
int maxlvt, num_starts, j;
- if (get_uv_system_type() == UV_NON_UNIQUE_APIC) {
- send_status = uv_wakeup_secondary(phys_apicid, start_eip);
- atomic_set(&init_deasserted, 1);
- return send_status;
- }
-
maxlvt = lapic_get_maxlvt();
/*
/*
* NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad
* (ie clustered apic addressing mode), this is a LOGICAL apic ID.
- * Returns zero if CPU booted OK, else error code from ->wakeup_cpu.
+ * Returns zero if CPU booted OK, else error code from
+ * ->wakeup_secondary_cpu.
*/
static int __cpuinit do_boot_cpu(int apicid, int cpu)
{
}
/*
- * Starting actual IPI sequence...
+ * Kick the secondary CPU. Use the method in the APIC driver
+ * if it's defined - or use an INIT boot APIC message otherwise:
*/
- boot_error = apic->wakeup_cpu(apicid, start_ip);
+ if (apic->wakeup_secondary_cpu)
+ boot_error = apic->wakeup_secondary_cpu(apicid, start_ip);
+ else
+ boot_error = wakeup_secondary_cpu_via_init(apicid, start_ip);
if (!boot_error) {
/*
pr_debug("Boot done.\n");
impress_friends();
- smp_checks();
#ifdef CONFIG_X86_IO_APIC
setup_ioapic_dest();
#endif
int locals = 0;
struct bau_desc *bau_desc;
- WARN_ON(!in_atomic());
-
cpumask_andnot(flush_mask, cpumask, cpumask_of(cpu));
uv_cpu = uv_blade_processor_id();
if (!user_mode_vm(regs))
die(str, regs, err);
}
-
-/*
- * Perform the lazy TSS's I/O bitmap copy. If the TSS has an
- * invalid offset set (the LAZY one) and the faulting thread has
- * a valid I/O bitmap pointer, we copy the I/O bitmap in the TSS,
- * we set the offset field correctly and return 1.
- */
-static int lazy_iobitmap_copy(void)
-{
- struct thread_struct *thread;
- struct tss_struct *tss;
- int cpu;
-
- cpu = get_cpu();
- tss = &per_cpu(init_tss, cpu);
- thread = &current->thread;
-
- if (tss->x86_tss.io_bitmap_base == INVALID_IO_BITMAP_OFFSET_LAZY &&
- thread->io_bitmap_ptr) {
- memcpy(tss->io_bitmap, thread->io_bitmap_ptr,
- thread->io_bitmap_max);
- /*
- * If the previously set map was extending to higher ports
- * than the current one, pad extra space with 0xff (no access).
- */
- if (thread->io_bitmap_max < tss->io_bitmap_max) {
- memset((char *) tss->io_bitmap +
- thread->io_bitmap_max, 0xff,
- tss->io_bitmap_max - thread->io_bitmap_max);
- }
- tss->io_bitmap_max = thread->io_bitmap_max;
- tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET;
- tss->io_bitmap_owner = thread;
- put_cpu();
-
- return 1;
- }
- put_cpu();
-
- return 0;
-}
#endif
static void __kprobes
conditional_sti(regs);
#ifdef CONFIG_X86_32
- if (lazy_iobitmap_copy()) {
- /* restart the faulting instruction */
- return;
- }
-
if (regs->flags & X86_VM_MASK)
goto gp_in_vm86;
#endif
--- /dev/null
+/*
+ * SGI RTC clock/timer routines.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Copyright (c) 2009 Silicon Graphics, Inc. All Rights Reserved.
+ * Copyright (c) Dimitri Sivanich
+ */
+#include <linux/clockchips.h>
+
+#include <asm/uv/uv_mmrs.h>
+#include <asm/uv/uv_hub.h>
+#include <asm/uv/bios.h>
+#include <asm/uv/uv.h>
+#include <asm/apic.h>
+#include <asm/cpu.h>
+
+#define RTC_NAME "sgi_rtc"
+
+static cycle_t uv_read_rtc(void);
+static int uv_rtc_next_event(unsigned long, struct clock_event_device *);
+static void uv_rtc_timer_setup(enum clock_event_mode,
+ struct clock_event_device *);
+
+static struct clocksource clocksource_uv = {
+ .name = RTC_NAME,
+ .rating = 400,
+ .read = uv_read_rtc,
+ .mask = (cycle_t)UVH_RTC_REAL_TIME_CLOCK_MASK,
+ .shift = 10,
+ .flags = CLOCK_SOURCE_IS_CONTINUOUS,
+};
+
+static struct clock_event_device clock_event_device_uv = {
+ .name = RTC_NAME,
+ .features = CLOCK_EVT_FEAT_ONESHOT,
+ .shift = 20,
+ .rating = 400,
+ .irq = -1,
+ .set_next_event = uv_rtc_next_event,
+ .set_mode = uv_rtc_timer_setup,
+ .event_handler = NULL,
+};
+
+static DEFINE_PER_CPU(struct clock_event_device, cpu_ced);
+
+/* There is one of these allocated per node */
+struct uv_rtc_timer_head {
+ spinlock_t lock;
+ /* next cpu waiting for timer, local node relative: */
+ int next_cpu;
+ /* number of cpus on this node: */
+ int ncpus;
+ struct {
+ int lcpu; /* systemwide logical cpu number */
+ u64 expires; /* next timer expiration for this cpu */
+ } cpu[1];
+};
+
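The trailing cpu[1] member is the old one-element-array idiom for a
variable-length tail: uv_rtc_allocate_timers() below oversizes each
per-node allocation so that cpu[] effectively holds head->ncpus
entries. A standalone sketch of the same idiom (illustrative only,
not the kernel struct, and assuming ncpus >= 1):

	#include <stdlib.h>

	struct head {
		int	ncpus;
		struct {
			int			lcpu;
			unsigned long long	expires;
		} cpu[1];			/* really ncpus entries */
	};

	static struct head *alloc_head(int ncpus)
	{
		/* one entry already lives inside the struct; the patch
		 * budgets 2 * sizeof(u64) per cpu on top, which covers
		 * one padded { int, u64 } entry */
		struct head *h = malloc(sizeof(*h) +
					(ncpus - 1) * sizeof(h->cpu[0]));
		if (h)
			h->ncpus = ncpus;
		return h;
	}
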
+/*
+ * Access to uv_rtc_timer_head via blade id.
+ */
+static struct uv_rtc_timer_head **blade_info __read_mostly;
+
+static int uv_rtc_enable;
+
+/*
+ * Hardware interface routines
+ */
+
+/* Send IPIs to another node */
+static void uv_rtc_send_IPI(int cpu)
+{
+ unsigned long apicid, val;
+ int pnode;
+
+ apicid = cpu_physical_id(cpu);
+ pnode = uv_apicid_to_pnode(apicid);
+ val = (1UL << UVH_IPI_INT_SEND_SHFT) |
+ (apicid << UVH_IPI_INT_APIC_ID_SHFT) |
+ (GENERIC_INTERRUPT_VECTOR << UVH_IPI_INT_VECTOR_SHFT);
+
+ uv_write_global_mmr64(pnode, UVH_IPI_INT, val);
+}
+
+/* Check for an RTC interrupt pending */
+static int uv_intr_pending(int pnode)
+{
+ return uv_read_global_mmr64(pnode, UVH_EVENT_OCCURRED0) &
+ UVH_EVENT_OCCURRED0_RTC1_MASK;
+}
+
+/* Set up the interrupt and return non-zero if it has already expired. */
+static int uv_setup_intr(int cpu, u64 expires)
+{
+ u64 val;
+ int pnode = uv_cpu_to_pnode(cpu);
+
+ uv_write_global_mmr64(pnode, UVH_RTC1_INT_CONFIG,
+ UVH_RTC1_INT_CONFIG_M_MASK);
+ uv_write_global_mmr64(pnode, UVH_INT_CMPB, -1L);
+
+ uv_write_global_mmr64(pnode, UVH_EVENT_OCCURRED0_ALIAS,
+ UVH_EVENT_OCCURRED0_RTC1_MASK);
+
+ val = (GENERIC_INTERRUPT_VECTOR << UVH_RTC1_INT_CONFIG_VECTOR_SHFT) |
+ ((u64)cpu_physical_id(cpu) << UVH_RTC1_INT_CONFIG_APIC_ID_SHFT);
+
+ /* Set configuration */
+ uv_write_global_mmr64(pnode, UVH_RTC1_INT_CONFIG, val);
+ /* Initialize comparator value */
+ uv_write_global_mmr64(pnode, UVH_INT_CMPB, expires);
+
+ return (expires < uv_read_rtc() && !uv_intr_pending(pnode));
+}
+
+/*
+ * Per-cpu timer tracking routines
+ */
+
+static __init void uv_rtc_deallocate_timers(void)
+{
+ int bid;
+
+ for_each_possible_blade(bid) {
+ kfree(blade_info[bid]);
+ }
+ kfree(blade_info);
+}
+
+/* Allocate per-node list of cpu timer expiration times. */
+static __init int uv_rtc_allocate_timers(void)
+{
+ int cpu;
+
+ blade_info = kmalloc(uv_possible_blades * sizeof(void *), GFP_KERNEL);
+ if (!blade_info)
+ return -ENOMEM;
+ memset(blade_info, 0, uv_possible_blades * sizeof(void *));
+
+ for_each_present_cpu(cpu) {
+ int nid = cpu_to_node(cpu);
+ int bid = uv_cpu_to_blade_id(cpu);
+ int bcpu = uv_cpu_hub_info(cpu)->blade_processor_id;
+ struct uv_rtc_timer_head *head = blade_info[bid];
+
+ if (!head) {
+ head = kmalloc_node(sizeof(struct uv_rtc_timer_head) +
+ (uv_blade_nr_possible_cpus(bid) *
+ 2 * sizeof(u64)),
+ GFP_KERNEL, nid);
+ if (!head) {
+ uv_rtc_deallocate_timers();
+ return -ENOMEM;
+ }
+ spin_lock_init(&head->lock);
+ head->ncpus = uv_blade_nr_possible_cpus(bid);
+ head->next_cpu = -1;
+ blade_info[bid] = head;
+ }
+
+ head->cpu[bcpu].lcpu = cpu;
+ head->cpu[bcpu].expires = ULLONG_MAX;
+ }
+
+ return 0;
+}
+
+/* Find and set the next expiring timer. */
+static void uv_rtc_find_next_timer(struct uv_rtc_timer_head *head, int pnode)
+{
+ u64 lowest = ULLONG_MAX;
+ int c, bcpu = -1;
+
+ head->next_cpu = -1;
+ for (c = 0; c < head->ncpus; c++) {
+ u64 exp = head->cpu[c].expires;
+ if (exp < lowest) {
+ bcpu = c;
+ lowest = exp;
+ }
+ }
+ if (bcpu >= 0) {
+ head->next_cpu = bcpu;
+ c = head->cpu[bcpu].lcpu;
+ if (uv_setup_intr(c, lowest))
+ /* If we didn't set it up in time, trigger it via IPI */
+ uv_rtc_send_IPI(c);
+ } else {
+ uv_write_global_mmr64(pnode, UVH_RTC1_INT_CONFIG,
+ UVH_RTC1_INT_CONFIG_M_MASK);
+ }
+}
+
+/*
+ * Set expiration time for current cpu.
+ *
+ * Returns 1 if we missed the expiration time.
+ */
+static int uv_rtc_set_timer(int cpu, u64 expires)
+{
+ int pnode = uv_cpu_to_pnode(cpu);
+ int bid = uv_cpu_to_blade_id(cpu);
+ struct uv_rtc_timer_head *head = blade_info[bid];
+ int bcpu = uv_cpu_hub_info(cpu)->blade_processor_id;
+ u64 *t = &head->cpu[bcpu].expires;
+ unsigned long flags;
+ int next_cpu;
+
+ spin_lock_irqsave(&head->lock, flags);
+
+ next_cpu = head->next_cpu;
+ *t = expires;
+ /* Will this one be next to go off? */
+ if (next_cpu < 0 || bcpu == next_cpu ||
+ expires < head->cpu[next_cpu].expires) {
+ head->next_cpu = bcpu;
+ if (uv_setup_intr(cpu, expires)) {
+ *t = ULLONG_MAX;
+ uv_rtc_find_next_timer(head, pnode);
+ spin_unlock_irqrestore(&head->lock, flags);
+ return 1;
+ }
+ }
+
+ spin_unlock_irqrestore(&head->lock, flags);
+ return 0;
+}
+
+/*
+ * Unset expiration time for current cpu.
+ *
+ * Returns 1 if this timer was pending.
+ */
+static int uv_rtc_unset_timer(int cpu)
+{
+ int pnode = uv_cpu_to_pnode(cpu);
+ int bid = uv_cpu_to_blade_id(cpu);
+ struct uv_rtc_timer_head *head = blade_info[bid];
+ int bcpu = uv_cpu_hub_info(cpu)->blade_processor_id;
+ u64 *t = &head->cpu[bcpu].expires;
+ unsigned long flags;
+ int rc = 0;
+
+ spin_lock_irqsave(&head->lock, flags);
+
+ if (head->next_cpu == bcpu && uv_read_rtc() >= *t)
+ rc = 1;
+
+ *t = ULLONG_MAX;
+
+ /* Was the hardware set up for this timer? */
+ if (head->next_cpu == bcpu)
+ uv_rtc_find_next_timer(head, pnode);
+
+ spin_unlock_irqrestore(&head->lock, flags);
+
+ return rc;
+}
+
+
+/*
+ * Kernel interface routines.
+ */
+
+/*
+ * Read the RTC.
+ */
+static cycle_t uv_read_rtc(void)
+{
+ return (cycle_t)uv_read_local_mmr(UVH_RTC);
+}
+
+/*
+ * Program the next event, relative to now
+ */
+static int uv_rtc_next_event(unsigned long delta,
+ struct clock_event_device *ced)
+{
+ int ced_cpu = cpumask_first(ced->cpumask);
+
+ return uv_rtc_set_timer(ced_cpu, delta + uv_read_rtc());
+}
+
+/*
+ * Setup the RTC timer in oneshot mode
+ */
+static void uv_rtc_timer_setup(enum clock_event_mode mode,
+ struct clock_event_device *evt)
+{
+ int ced_cpu = cpumask_first(evt->cpumask);
+
+ switch (mode) {
+ case CLOCK_EVT_MODE_PERIODIC:
+ case CLOCK_EVT_MODE_ONESHOT:
+ case CLOCK_EVT_MODE_RESUME:
+ /* Nothing to do here yet */
+ break;
+ case CLOCK_EVT_MODE_UNUSED:
+ case CLOCK_EVT_MODE_SHUTDOWN:
+ uv_rtc_unset_timer(ced_cpu);
+ break;
+ }
+}
+
+static void uv_rtc_interrupt(void)
+{
+ struct clock_event_device *ced = &__get_cpu_var(cpu_ced);
+ int cpu = smp_processor_id();
+
+ if (!ced || !ced->event_handler)
+ return;
+
+ if (uv_rtc_unset_timer(cpu) != 1)
+ return;
+
+ ced->event_handler(ced);
+}
+
+static int __init uv_enable_rtc(char *str)
+{
+ uv_rtc_enable = 1;
+
+ return 1;
+}
+__setup("uvrtc", uv_enable_rtc);
+
+static __init void uv_rtc_register_clockevents(struct work_struct *dummy)
+{
+ struct clock_event_device *ced = &__get_cpu_var(cpu_ced);
+
+ *ced = clock_event_device_uv;
+ ced->cpumask = cpumask_of(smp_processor_id());
+ clockevents_register_device(ced);
+}
+
+static __init int uv_rtc_setup_clock(void)
+{
+ int rc;
+
+ if (!uv_rtc_enable || !is_uv_system() || generic_interrupt_extension)
+ return -ENODEV;
+
+ generic_interrupt_extension = uv_rtc_interrupt;
+
+ clocksource_uv.mult = clocksource_hz2mult(sn_rtc_cycles_per_second,
+ clocksource_uv.shift);
+
+ rc = clocksource_register(&clocksource_uv);
+ if (rc) {
+ generic_interrupt_extension = NULL;
+ return rc;
+ }
+
+ /* Setup and register clockevents */
+ rc = uv_rtc_allocate_timers();
+ if (rc) {
+ clocksource_unregister(&clocksource_uv);
+ generic_interrupt_extension = NULL;
+ return rc;
+ }
+
+ clock_event_device_uv.mult = div_sc(sn_rtc_cycles_per_second,
+ NSEC_PER_SEC, clock_event_device_uv.shift);
+
+ clock_event_device_uv.min_delta_ns = NSEC_PER_SEC /
+ sn_rtc_cycles_per_second;
+
+ clock_event_device_uv.max_delta_ns = clocksource_uv.mask *
+ (NSEC_PER_SEC / sn_rtc_cycles_per_second);
+
+ rc = schedule_on_each_cpu(uv_rtc_register_clockevents);
+ if (rc) {
+ clocksource_unregister(&clocksource_uv);
+ generic_interrupt_extension = NULL;
+ uv_rtc_deallocate_timers();
+ }
+
+ return rc;
+}
+arch_initcall(uv_rtc_setup_clock);
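Two practical notes on the code above. First, as the __setup() handler
earlier in the file shows, the whole clocksource/clockevent
registration is opt-in: it only happens when a UV system is booted
with the uvrtc kernel command-line parameter. Second, the mult/shift
pairs are ordinary fixed-point scale factors; for a hypothetical RTC
running at 100 MHz (the real sn_rtc_cycles_per_second is
platform-supplied), the clocksource side works out to:

	mult = clocksource_hz2mult(100000000, 10)
	     = (10^9 << 10) / 10^8 = 10240

	ns   = (cycles * mult) >> shift
	     = (cycles * 10240) >> 10 = cycles * 10	/* 10 ns per cycle */
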
static irqreturn_t piix4_master_intr(int irq, void *dev_id)
{
int realirq;
- irq_desc_t *desc;
+ struct irq_desc *desc;
unsigned long flags;
spin_lock_irqsave(&i8259A_lock, flags);
ASSERT((per_cpu__irq_stack_union == 0),
"irq_stack_union is not at start of per-cpu area");
#endif
+
+#ifdef CONFIG_KEXEC
+#include <asm/kexec.h>
+
+ASSERT(kexec_control_code_size <= KEXEC_CONTROL_CODE_MAX_SIZE,
+ "kexec control code size is too big")
+#endif
#include <asm/paravirt.h>
#include <asm/setup.h>
-#if defined CONFIG_PCI && defined CONFIG_PARAVIRT
+#ifdef CONFIG_PARAVIRT
/*
* Interrupt control on vSMPowered systems:
* ~AC is a shadow of IF. If IF is 'on' AC should be 'off'
}
#endif
-#ifdef CONFIG_PCI
static int is_vsmp = -1;
static void __init detect_vsmp_box(void)
return 0;
}
}
-#else
-static void __init detect_vsmp_box(void)
-{
-}
-int is_vsmp_box(void)
-{
- return 0;
-}
-#endif
void __init vsmp_init(void)
{
* flush_tlb_user() for both user and kernel mappings unless
* the Page Global Enable (PGE) feature bit is set. */
*dx |= 0x00002000;
+ /* We also lie, and say we're family id 5. 6 or greater
+ * leads to a rdmsr in early_init_intel which we can't handle.
+ * Family ID is returned as bits 8-12 in ax. */
+ *ax &= 0xFFFFF0FF;
+ *ax |= 0x00000500;
break;
case 0x80000000:
/* Futureproof this a little: if they ask how much extended
/* Some systems map "vectors" to interrupts weirdly. Lguest has
* a straightforward 1 to 1 mapping, so force that here. */
__get_cpu_var(vector_irq)[vector] = i;
- if (vector != SYSCALL_VECTOR) {
- set_intr_gate(vector,
- interrupt[vector-FIRST_EXTERNAL_VECTOR]);
- set_irq_chip_and_handler_name(i, &lguest_irq_controller,
- handle_level_irq,
- "level");
- }
+ if (vector != SYSCALL_VECTOR)
+ set_intr_gate(vector, interrupt[i]);
}
/* This call is required to set up for 4k stacks, where we have
* separate stacks for hard and soft interrupts. */
irq_ctx_init(smp_processor_id());
}
+void lguest_setup_irq(unsigned int irq)
+{
+ irq_to_desc_alloc_cpu(irq, 0);
+ set_irq_chip_and_handler_name(irq, &lguest_irq_controller,
+ handle_level_irq, "level");
+}
+
/*
* Time.
*
}
/* Needs to be externally visible */
-void finit(void)
+void finit_task(struct task_struct *tsk)
{
- control_word = 0x037f;
- partial_status = 0;
- top = 0; /* We don't keep top in the status word internally. */
- fpu_tag_word = 0xffff;
+ struct i387_soft_struct *soft = &tsk->thread.xstate->soft;
+ struct address *oaddr, *iaddr;
+ soft->cwd = 0x037f;
+ soft->swd = 0;
+ soft->ftop = 0; /* We don't keep top in the status word internally. */
+ soft->twd = 0xffff;
/* The behaviour is different from that detailed in
Section 15.1.6 of the Intel manual */
- operand_address.offset = 0;
- operand_address.selector = 0;
- instruction_address.offset = 0;
- instruction_address.selector = 0;
- instruction_address.opcode = 0;
- no_ip_update = 1;
+ oaddr = (struct address *)&soft->foo;
+ oaddr->offset = 0;
+ oaddr->selector = 0;
+ iaddr = (struct address *)&soft->fip;
+ iaddr->offset = 0;
+ iaddr->selector = 0;
+ iaddr->opcode = 0;
+ soft->no_update = 1;
+}
+
+void finit(void)
+{
+ finit_task(current);
}
/*
-obj-y := init_$(BITS).o fault.o ioremap.o extable.o pageattr.o mmap.o \
+obj-y := init.o init_$(BITS).o fault.o ioremap.o extable.o pageattr.o mmap.o \
pat.o pgtable.o gup.o
obj-$(CONFIG_SMP) += tlb.o
#include <linux/highmem.h>
#include <linux/module.h>
+#include <linux/swap.h> /* for totalram_pages */
void *kmap(struct page *page)
{
EXPORT_SYMBOL(kunmap);
EXPORT_SYMBOL(kmap_atomic);
EXPORT_SYMBOL(kunmap_atomic);
+
+void __init set_highmem_pages_init(void)
+{
+ struct zone *zone;
+ int nid;
+
+ for_each_zone(zone) {
+ unsigned long zone_start_pfn, zone_end_pfn;
+
+ if (!is_highmem(zone))
+ continue;
+
+ zone_start_pfn = zone->zone_start_pfn;
+ zone_end_pfn = zone_start_pfn + zone->spanned_pages;
+
+ nid = zone_to_nid(zone);
+ printk(KERN_INFO "Initializing %s for node %d (%08lx:%08lx)\n",
+ zone->name, nid, zone_start_pfn, zone_end_pfn);
+
+ add_highpages_with_active_regions(nid, zone_start_pfn,
+ zone_end_pfn);
+ }
+ totalram_pages += totalhigh_pages;
+}
--- /dev/null
+#include <linux/ioport.h>
+#include <linux/swap.h>
+
+#include <asm/cacheflush.h>
+#include <asm/e820.h>
+#include <asm/init.h>
+#include <asm/page.h>
+#include <asm/page_types.h>
+#include <asm/sections.h>
+#include <asm/system.h>
+#include <asm/tlbflush.h>
+
+unsigned long __initdata e820_table_start;
+unsigned long __meminitdata e820_table_end;
+unsigned long __meminitdata e820_table_top;
+
+int after_bootmem;
+
+int direct_gbpages
+#ifdef CONFIG_DIRECT_GBPAGES
+ = 1
+#endif
+;
+
+static void __init find_early_table_space(unsigned long end, int use_pse,
+ int use_gbpages)
+{
+ unsigned long puds, pmds, ptes, tables, start;
+
+ puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
+ tables = roundup(puds * sizeof(pud_t), PAGE_SIZE);
+
+ if (use_gbpages) {
+ unsigned long extra;
+
+ extra = end - ((end>>PUD_SHIFT) << PUD_SHIFT);
+ pmds = (extra + PMD_SIZE - 1) >> PMD_SHIFT;
+ } else
+ pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;
+
+ tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE);
+
+ if (use_pse) {
+ unsigned long extra;
+
+ extra = end - ((end>>PMD_SHIFT) << PMD_SHIFT);
+#ifdef CONFIG_X86_32
+ extra += PMD_SIZE;
+#endif
+ ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ } else
+ ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT;
+
+ tables += roundup(ptes * sizeof(pte_t), PAGE_SIZE);
+
+#ifdef CONFIG_X86_32
+ /* for fixmap */
+ tables += roundup(__end_of_fixed_addresses * sizeof(pte_t), PAGE_SIZE);
+#endif
+
+ /*
+ * RED-PEN putting page tables only on node 0 could
+ * cause a hotspot and fill up ZONE_DMA. The page tables
+ * need roughly 0.5KB per GB.
+ */
+#ifdef CONFIG_X86_32
+ start = 0x7000;
+ e820_table_start = find_e820_area(start, max_pfn_mapped<<PAGE_SHIFT,
+ tables, PAGE_SIZE);
+#else /* CONFIG_X86_64 */
+ start = 0x8000;
+ e820_table_start = find_e820_area(start, end, tables, PAGE_SIZE);
+#endif
+ if (e820_table_start == -1UL)
+ panic("Cannot find space for the kernel page tables");
+
+ e820_table_start >>= PAGE_SHIFT;
+ e820_table_end = e820_table_start;
+ e820_table_top = e820_table_start + (tables >> PAGE_SHIFT);
+
+ printk(KERN_DEBUG "kernel direct mapping tables up to %lx @ %lx-%lx\n",
+ end, e820_table_start << PAGE_SHIFT, e820_table_top << PAGE_SHIFT);
+}
+
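To sanity-check the estimate, take a hypothetical 64-bit machine with
end = 4 GiB, use_pse = 1 and use_gbpages = 0:

	puds    = (4 GiB + 1 GiB - 1) >> 30 = 4
	tables  = roundup(4 * 8, 4096)            =  4 KiB
	pmds    = (4 GiB + 2 MiB - 1) >> 21 = 2048
	tables += roundup(2048 * 8, 4096)         = 16 KiB
	extra   = 4 GiB - ((4 GiB >> 21) << 21) = 0	/* end is 2M aligned */
	ptes    = (0 + 4095) >> 12 = 0            =  0

so about 20 KiB of early page-table space suffices for a 4 GiB direct
map built from 2 MiB pages.
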
+struct map_range {
+ unsigned long start;
+ unsigned long end;
+ unsigned page_size_mask;
+};
+
+#ifdef CONFIG_X86_32
+#define NR_RANGE_MR 3
+#else /* CONFIG_X86_64 */
+#define NR_RANGE_MR 5
+#endif
+
+static int save_mr(struct map_range *mr, int nr_range,
+ unsigned long start_pfn, unsigned long end_pfn,
+ unsigned long page_size_mask)
+{
+ if (start_pfn < end_pfn) {
+ if (nr_range >= NR_RANGE_MR)
+ panic("run out of range for init_memory_mapping\n");
+ mr[nr_range].start = start_pfn<<PAGE_SHIFT;
+ mr[nr_range].end = end_pfn<<PAGE_SHIFT;
+ mr[nr_range].page_size_mask = page_size_mask;
+ nr_range++;
+ }
+
+ return nr_range;
+}
+
+#ifdef CONFIG_X86_64
+static void __init init_gbpages(void)
+{
+ if (direct_gbpages && cpu_has_gbpages)
+ printk(KERN_INFO "Using GB pages for direct mapping\n");
+ else
+ direct_gbpages = 0;
+}
+#else
+static inline void init_gbpages(void)
+{
+}
+#endif
+
+/*
+ * Setup the direct mapping of the physical memory at PAGE_OFFSET.
+ * This runs before bootmem is initialized and gets pages directly from
+ * the physical memory. To access them they are temporarily mapped.
+ */
+unsigned long __init_refok init_memory_mapping(unsigned long start,
+ unsigned long end)
+{
+ unsigned long page_size_mask = 0;
+ unsigned long start_pfn, end_pfn;
+ unsigned long ret = 0;
+ unsigned long pos;
+
+ struct map_range mr[NR_RANGE_MR];
+ int nr_range, i;
+ int use_pse, use_gbpages;
+
+ printk(KERN_INFO "init_memory_mapping: %016lx-%016lx\n", start, end);
+
+ if (!after_bootmem)
+ init_gbpages();
+
+#ifdef CONFIG_DEBUG_PAGEALLOC
+ /*
+ * For CONFIG_DEBUG_PAGEALLOC, identity mapping will use small pages.
+ * This will simplify cpa(), which otherwise needs to support splitting
+ * large pages into small in interrupt context, etc.
+ */
+ use_pse = use_gbpages = 0;
+#else
+ use_pse = cpu_has_pse;
+ use_gbpages = direct_gbpages;
+#endif
+
+#ifdef CONFIG_X86_32
+#ifdef CONFIG_X86_PAE
+ set_nx();
+ if (nx_enabled)
+ printk(KERN_INFO "NX (Execute Disable) protection: active\n");
+#endif
+
+ /* Enable PSE if available */
+ if (cpu_has_pse)
+ set_in_cr4(X86_CR4_PSE);
+
+ /* Enable PGE if available */
+ if (cpu_has_pge) {
+ set_in_cr4(X86_CR4_PGE);
+ __supported_pte_mask |= _PAGE_GLOBAL;
+ }
+#endif
+
+ if (use_gbpages)
+ page_size_mask |= 1 << PG_LEVEL_1G;
+ if (use_pse)
+ page_size_mask |= 1 << PG_LEVEL_2M;
+
+ memset(mr, 0, sizeof(mr));
+ nr_range = 0;
+
+ /* head, if start is not big-page aligned */
+ start_pfn = start >> PAGE_SHIFT;
+ pos = start_pfn << PAGE_SHIFT;
+#ifdef CONFIG_X86_32
+ /*
+ * Don't use a large page for the first 2/4MB of memory
+ * because there are often fixed size MTRRs in there
+ * and overlapping MTRRs into large pages can cause
+ * slowdowns.
+ */
+ if (pos == 0)
+ end_pfn = 1<<(PMD_SHIFT - PAGE_SHIFT);
+ else
+ end_pfn = ((pos + (PMD_SIZE - 1))>>PMD_SHIFT)
+ << (PMD_SHIFT - PAGE_SHIFT);
+#else /* CONFIG_X86_64 */
+ end_pfn = ((pos + (PMD_SIZE - 1)) >> PMD_SHIFT)
+ << (PMD_SHIFT - PAGE_SHIFT);
+#endif
+ if (end_pfn > (end >> PAGE_SHIFT))
+ end_pfn = end >> PAGE_SHIFT;
+ if (start_pfn < end_pfn) {
+ nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, 0);
+ pos = end_pfn << PAGE_SHIFT;
+ }
+
+ /* big page (2M) range */
+ start_pfn = ((pos + (PMD_SIZE - 1))>>PMD_SHIFT)
+ << (PMD_SHIFT - PAGE_SHIFT);
+#ifdef CONFIG_X86_32
+ end_pfn = (end>>PMD_SHIFT) << (PMD_SHIFT - PAGE_SHIFT);
+#else /* CONFIG_X86_64 */
+ end_pfn = ((pos + (PUD_SIZE - 1))>>PUD_SHIFT)
+ << (PUD_SHIFT - PAGE_SHIFT);
+ if (end_pfn > ((end>>PMD_SHIFT)<<(PMD_SHIFT - PAGE_SHIFT)))
+ end_pfn = ((end>>PMD_SHIFT)<<(PMD_SHIFT - PAGE_SHIFT));
+#endif
+
+ if (start_pfn < end_pfn) {
+ nr_range = save_mr(mr, nr_range, start_pfn, end_pfn,
+ page_size_mask & (1<<PG_LEVEL_2M));
+ pos = end_pfn << PAGE_SHIFT;
+ }
+
+#ifdef CONFIG_X86_64
+ /* big page (1G) range */
+ start_pfn = ((pos + (PUD_SIZE - 1))>>PUD_SHIFT)
+ << (PUD_SHIFT - PAGE_SHIFT);
+ end_pfn = (end >> PUD_SHIFT) << (PUD_SHIFT - PAGE_SHIFT);
+ if (start_pfn < end_pfn) {
+ nr_range = save_mr(mr, nr_range, start_pfn, end_pfn,
+ page_size_mask &
+ ((1<<PG_LEVEL_2M)|(1<<PG_LEVEL_1G)));
+ pos = end_pfn << PAGE_SHIFT;
+ }
+
+ /* tail that is not big page (1G) aligned */
+ start_pfn = ((pos + (PMD_SIZE - 1))>>PMD_SHIFT)
+ << (PMD_SHIFT - PAGE_SHIFT);
+ end_pfn = (end >> PMD_SHIFT) << (PMD_SHIFT - PAGE_SHIFT);
+ if (start_pfn < end_pfn) {
+ nr_range = save_mr(mr, nr_range, start_pfn, end_pfn,
+ page_size_mask & (1<<PG_LEVEL_2M));
+ pos = end_pfn << PAGE_SHIFT;
+ }
+#endif
+
+ /* tail that is not big page (2M) aligned */
+ start_pfn = pos>>PAGE_SHIFT;
+ end_pfn = end>>PAGE_SHIFT;
+ nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, 0);
+
+ /* try to merge contiguous ranges with the same page size */
+ for (i = 0; nr_range > 1 && i < nr_range - 1; i++) {
+ unsigned long old_start;
+ if (mr[i].end != mr[i+1].start ||
+ mr[i].page_size_mask != mr[i+1].page_size_mask)
+ continue;
+ /* move it */
+ old_start = mr[i].start;
+ memmove(&mr[i], &mr[i+1],
+ (nr_range - 1 - i) * sizeof(struct map_range));
+ mr[i--].start = old_start;
+ nr_range--;
+ }
+
+ for (i = 0; i < nr_range; i++)
+ printk(KERN_DEBUG " %010lx - %010lx page %s\n",
+ mr[i].start, mr[i].end,
+ (mr[i].page_size_mask & (1<<PG_LEVEL_1G))?"1G":(
+ (mr[i].page_size_mask & (1<<PG_LEVEL_2M))?"2M":"4k"));
+
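The merge pass above is the one subtle step: ranges come out of
save_mr() in address order, so neighbours that ended up with the same
page_size_mask (for instance, every range when PSE is disabled)
collapse into one entry. A self-contained userspace sketch of the same
loop on hypothetical ranges:

	#include <stdio.h>
	#include <string.h>

	struct map_range { unsigned long start, end; unsigned mask; };

	int main(void)
	{
		/* what save_mr() might produce with PSE off: all masks 0 */
		struct map_range mr[] = {
			{ 0x000000, 0x200000, 0 },
			{ 0x200000, 0x400000, 0 },
			{ 0x400000, 0x500000, 0 },
		};
		int nr_range = 3, i;

		for (i = 0; nr_range > 1 && i < nr_range - 1; i++) {
			unsigned long old_start;

			if (mr[i].end != mr[i + 1].start ||
			    mr[i].mask != mr[i + 1].mask)
				continue;
			old_start = mr[i].start;
			memmove(&mr[i], &mr[i + 1],
				(nr_range - 1 - i) * sizeof(mr[0]));
			mr[i--].start = old_start;
			nr_range--;
		}

		for (i = 0; i < nr_range; i++)	/* prints one merged range */
			printf("%#lx - %#lx\n", mr[i].start, mr[i].end);
		return 0;
	}
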
+ /*
+ * Find space for the kernel direct mapping tables.
+ *
+ * Later we should allocate these tables in the local node of the
+ * memory mapped. Unfortunately this is done currently before the
+ * nodes are discovered.
+ */
+ if (!after_bootmem)
+ find_early_table_space(end, use_pse, use_gbpages);
+
+#ifdef CONFIG_X86_32
+ for (i = 0; i < nr_range; i++)
+ kernel_physical_mapping_init(mr[i].start, mr[i].end,
+ mr[i].page_size_mask);
+ ret = end;
+#else /* CONFIG_X86_64 */
+ for (i = 0; i < nr_range; i++)
+ ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
+ mr[i].page_size_mask);
+#endif
+
+#ifdef CONFIG_X86_32
+ early_ioremap_page_table_range_init();
+
+ load_cr3(swapper_pg_dir);
+#endif
+
+#ifdef CONFIG_X86_64
+ if (!after_bootmem)
+ mmu_cr4_features = read_cr4();
+#endif
+ __flush_tlb_all();
+
+ if (!after_bootmem && e820_table_end > e820_table_start)
+ reserve_early(e820_table_start << PAGE_SHIFT,
+ e820_table_end << PAGE_SHIFT, "PGTABLE");
+
+ if (!after_bootmem)
+ early_memtest(start, end);
+
+ return ret >> PAGE_SHIFT;
+}
+
+
+/*
+ * devmem_is_allowed() checks to see if /dev/mem access to a certain address
+ * is valid. The argument is a physical page number.
+ *
+ * On x86, access has to be given to the first megabyte of RAM because that area
+ * contains BIOS code and data regions used by X and dosemu and similar apps.
+ * Access has to be given to non-kernel-RAM areas as well; these contain the PCI
+ * MMIO resources as well as potential BIOS/ACPI data regions.
+ */
+int devmem_is_allowed(unsigned long pagenr)
+{
+ if (pagenr <= 256)
+ return 1;
+ if (iomem_is_exclusive(pagenr << PAGE_SHIFT))
+ return 0;
+ if (!page_is_ram(pagenr))
+ return 1;
+ return 0;
+}
+
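Concretely, the policy above makes calls like these (page numbers, so
with 4 KiB pages pagenr 256 is the 1 MiB boundary) come out as:

	pagenr 0x10   (64 KiB, legacy BIOS area)  -> 1   (<= 256)
	pagenr 0x200  (2 MiB, ordinary RAM)       -> 0   (RAM above 1 MiB)
	pagenr of a PCI MMIO BAR                  -> 1   (not RAM, not exclusive)
	pagenr of an IORESOURCE_EXCLUSIVE region  -> 0   (iomem_is_exclusive())
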
+void free_init_pages(char *what, unsigned long begin, unsigned long end)
+{
+ unsigned long addr = begin;
+
+ if (addr >= end)
+ return;
+
+ /*
+ * If debugging page accesses then do not free this memory but
+ * mark it not present - any buggy init-section access will
+ * create a kernel page fault:
+ */
+#ifdef CONFIG_DEBUG_PAGEALLOC
+ printk(KERN_INFO "debug: unmapping init memory %08lx..%08lx\n",
+ begin, PAGE_ALIGN(end));
+ set_memory_np(begin, (end - begin) >> PAGE_SHIFT);
+#else
+ /*
+ * We just marked the kernel text read only above; now that
+ * we are going to free part of it, we need to make it
+ * writable first.
+ */
+ set_memory_rw(begin, (end - begin) >> PAGE_SHIFT);
+
+ printk(KERN_INFO "Freeing %s: %luk freed\n", what, (end - begin) >> 10);
+
+ for (; addr < end; addr += PAGE_SIZE) {
+ ClearPageReserved(virt_to_page(addr));
+ init_page_count(virt_to_page(addr));
+ memset((void *)(addr & ~(PAGE_SIZE-1)),
+ POISON_FREE_INITMEM, PAGE_SIZE);
+ free_page(addr);
+ totalram_pages++;
+ }
+#endif
+}
+
+void free_initmem(void)
+{
+ free_init_pages("unused kernel memory",
+ (unsigned long)(&__init_begin),
+ (unsigned long)(&__init_end));
+}
+
+#ifdef CONFIG_BLK_DEV_INITRD
+void free_initrd_mem(unsigned long start, unsigned long end)
+{
+ free_init_pages("initrd memory", start, end);
+}
+#endif
#include <asm/paravirt.h>
#include <asm/setup.h>
#include <asm/cacheflush.h>
-
-unsigned int __VMALLOC_RESERVE = 128 << 20;
+#include <asm/init.h>
unsigned long max_low_pfn_mapped;
unsigned long max_pfn_mapped;
static noinline int do_test_wp_bit(void);
-
-static unsigned long __initdata table_start;
-static unsigned long __meminitdata table_end;
-static unsigned long __meminitdata table_top;
-
-static int __initdata after_init_bootmem;
+bool __read_mostly __vmalloc_start_set = false;
static __init void *alloc_low_page(void)
{
- unsigned long pfn = table_end++;
+ unsigned long pfn = e820_table_end++;
void *adr;
- if (pfn >= table_top)
+ if (pfn >= e820_table_top)
panic("alloc_low_page: ran out of memory");
adr = __va(pfn * PAGE_SIZE);
#ifdef CONFIG_X86_PAE
if (!(pgd_val(*pgd) & _PAGE_PRESENT)) {
- if (after_init_bootmem)
+ if (after_bootmem)
pmd_table = (pmd_t *)alloc_bootmem_low_pages(PAGE_SIZE);
else
pmd_table = (pmd_t *)alloc_low_page();
if (!(pmd_val(*pmd) & _PAGE_PRESENT)) {
pte_t *page_table = NULL;
- if (after_init_bootmem) {
+ if (after_bootmem) {
#ifdef CONFIG_DEBUG_PAGEALLOC
page_table = (pte_t *) alloc_bootmem_pages(PAGE_SIZE);
#endif
return pte_offset_kernel(pmd, 0);
}
+pmd_t * __init populate_extra_pmd(unsigned long vaddr)
+{
+ int pgd_idx = pgd_index(vaddr);
+ int pmd_idx = pmd_index(vaddr);
+
+ return one_md_table_init(swapper_pg_dir + pgd_idx) + pmd_idx;
+}
+
+pte_t * __init populate_extra_pte(unsigned long vaddr)
+{
+ int pte_idx = pte_index(vaddr);
+ pmd_t *pmd;
+
+ pmd = populate_extra_pmd(vaddr);
+ return one_page_table_init(pmd) + pte_idx;
+}
+
static pte_t *__init page_table_kmap_check(pte_t *pte, pmd_t *pmd,
unsigned long vaddr, pte_t *lastpte)
{
if (pmd_idx_kmap_begin != pmd_idx_kmap_end
&& (vaddr >> PMD_SHIFT) >= pmd_idx_kmap_begin
&& (vaddr >> PMD_SHIFT) <= pmd_idx_kmap_end
- && ((__pa(pte) >> PAGE_SHIFT) < table_start
- || (__pa(pte) >> PAGE_SHIFT) >= table_end)) {
+ && ((__pa(pte) >> PAGE_SHIFT) < e820_table_start
+ || (__pa(pte) >> PAGE_SHIFT) >= e820_table_end)) {
pte_t *newpte;
int i;
- BUG_ON(after_init_bootmem);
+ BUG_ON(after_bootmem);
newpte = alloc_low_page();
for (i = 0; i < PTRS_PER_PTE; i++)
set_pte(newpte + i, pte[i]);
* of max_low_pfn pages, by creating page tables starting from address
* PAGE_OFFSET:
*/
-static void __init kernel_physical_mapping_init(pgd_t *pgd_base,
- unsigned long start_pfn,
- unsigned long end_pfn,
- int use_pse)
+unsigned long __init
+kernel_physical_mapping_init(unsigned long start,
+ unsigned long end,
+ unsigned long page_size_mask)
{
+ int use_pse = page_size_mask == (1<<PG_LEVEL_2M);
+ unsigned long start_pfn, end_pfn;
+ pgd_t *pgd_base = swapper_pg_dir;
int pgd_idx, pmd_idx, pte_ofs;
unsigned long pfn;
pgd_t *pgd;
unsigned pages_2m, pages_4k;
int mapping_iter;
+ start_pfn = start >> PAGE_SHIFT;
+ end_pfn = end >> PAGE_SHIFT;
+
/*
* First iteration will setup identity mapping using large/small pages
* based on use_pse, with other attributes same as set by
mapping_iter = 2;
goto repeat;
}
-}
-
-/*
- * devmem_is_allowed() checks to see if /dev/mem access to a certain address
- * is valid. The argument is a physical page number.
- *
- *
- * On x86, access has to be given to the first megabyte of ram because that area
- * contains bios code and data regions used by X and dosemu and similar apps.
- * Access has to be given to non-kernel-ram areas as well, these contain the PCI
- * mmio resources as well as potential bios/acpi data regions.
- */
-int devmem_is_allowed(unsigned long pagenr)
-{
- if (pagenr <= 256)
- return 1;
- if (iomem_is_exclusive(pagenr << PAGE_SHIFT))
- return 0;
- if (!page_is_ram(pagenr))
- return 1;
return 0;
}
work_with_active_regions(nid, add_highpages_work_fn, &data);
}
-#ifndef CONFIG_NUMA
-static void __init set_highmem_pages_init(void)
-{
- add_highpages_with_active_regions(0, highstart_pfn, highend_pfn);
-
- totalram_pages += totalhigh_pages;
-}
-#endif /* !CONFIG_NUMA */
-
#else
static inline void permanent_kmaps_init(pgd_t *pgd_base)
{
}
-static inline void set_highmem_pages_init(void)
-{
-}
#endif /* CONFIG_HIGHMEM */
void __init native_pagetable_setup_start(pgd_t *base)
* be partially populated, and so it avoids stomping on any existing
* mappings.
*/
-static void __init early_ioremap_page_table_range_init(pgd_t *pgd_base)
+void __init early_ioremap_page_table_range_init(void)
{
+ pgd_t *pgd_base = swapper_pg_dir;
unsigned long vaddr, end;
/*
}
early_param("noexec", noexec_setup);
-static void __init set_nx(void)
+void __init set_nx(void)
{
unsigned int v[4], l, h;
#ifdef CONFIG_FLATMEM
max_mapnr = num_physpages;
#endif
+ __vmalloc_start_set = true;
+
printk(KERN_NOTICE "%ldMB LOWMEM available.\n",
pages_to_mb(max_low_pfn));
free_area_init_nodes(max_zone_pfns);
}
+static unsigned long __init setup_node_bootmem(int nodeid,
+ unsigned long start_pfn,
+ unsigned long end_pfn,
+ unsigned long bootmap)
+{
+ unsigned long bootmap_size;
+
+ /* don't touch min_low_pfn */
+ bootmap_size = init_bootmem_node(NODE_DATA(nodeid),
+ bootmap >> PAGE_SHIFT,
+ start_pfn, end_pfn);
+ printk(KERN_INFO " node %d low ram: %08lx - %08lx\n",
+ nodeid, start_pfn<<PAGE_SHIFT, end_pfn<<PAGE_SHIFT);
+ printk(KERN_INFO " node %d bootmap %08lx - %08lx\n",
+ nodeid, bootmap, bootmap + bootmap_size);
+ free_bootmem_with_active_regions(nodeid, end_pfn);
+ early_res_to_bootmem(start_pfn<<PAGE_SHIFT, end_pfn<<PAGE_SHIFT);
+
+ return bootmap + bootmap_size;
+}
+
void __init setup_bootmem_allocator(void)
{
- int i;
+ int nodeid;
unsigned long bootmap_size, bootmap;
/*
* Initialize the boot-time allocator (with low memory only):
*/
bootmap_size = bootmem_bootmap_pages(max_low_pfn)<<PAGE_SHIFT;
- bootmap = find_e820_area(min_low_pfn<<PAGE_SHIFT,
- max_pfn_mapped<<PAGE_SHIFT, bootmap_size,
+ bootmap = find_e820_area(0, max_pfn_mapped<<PAGE_SHIFT, bootmap_size,
PAGE_SIZE);
if (bootmap == -1L)
panic("Cannot find bootmem map of size %ld\n", bootmap_size);
reserve_early(bootmap, bootmap + bootmap_size, "BOOTMAP");
- /* don't touch min_low_pfn */
- bootmap_size = init_bootmem_node(NODE_DATA(0), bootmap >> PAGE_SHIFT,
- min_low_pfn, max_low_pfn);
printk(KERN_INFO " mapped low ram: 0 - %08lx\n",
max_pfn_mapped<<PAGE_SHIFT);
- printk(KERN_INFO " low ram: %08lx - %08lx\n",
- min_low_pfn<<PAGE_SHIFT, max_low_pfn<<PAGE_SHIFT);
- printk(KERN_INFO " bootmap %08lx - %08lx\n",
- bootmap, bootmap + bootmap_size);
- for_each_online_node(i)
- free_bootmem_with_active_regions(i, max_low_pfn);
- early_res_to_bootmem(0, max_low_pfn<<PAGE_SHIFT);
-
- after_init_bootmem = 1;
-}
-
-static void __init find_early_table_space(unsigned long end, int use_pse)
-{
- unsigned long puds, pmds, ptes, tables, start;
-
- puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
- tables = PAGE_ALIGN(puds * sizeof(pud_t));
+ printk(KERN_INFO " low ram: 0 - %08lx\n", max_low_pfn<<PAGE_SHIFT);
- pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;
- tables += PAGE_ALIGN(pmds * sizeof(pmd_t));
-
- if (use_pse) {
- unsigned long extra;
-
- extra = end - ((end>>PMD_SHIFT) << PMD_SHIFT);
- extra += PMD_SIZE;
- ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
- } else
- ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT;
-
- tables += PAGE_ALIGN(ptes * sizeof(pte_t));
-
- /* for fixmap */
- tables += PAGE_ALIGN(__end_of_fixed_addresses * sizeof(pte_t));
-
- /*
- * RED-PEN putting page tables only on node 0 could
- * cause a hotspot and fill up ZONE_DMA. The page tables
- * need roughly 0.5KB per GB.
- */
- start = 0x7000;
- table_start = find_e820_area(start, max_pfn_mapped<<PAGE_SHIFT,
- tables, PAGE_SIZE);
- if (table_start == -1UL)
- panic("Cannot find space for the kernel page tables");
-
- table_start >>= PAGE_SHIFT;
- table_end = table_start;
- table_top = table_start + (tables>>PAGE_SHIFT);
-
- printk(KERN_DEBUG "kernel direct mapping tables up to %lx @ %lx-%lx\n",
- end, table_start << PAGE_SHIFT,
- (table_start << PAGE_SHIFT) + tables);
-}
+ for_each_online_node(nodeid) {
+ unsigned long start_pfn, end_pfn;
-unsigned long __init_refok init_memory_mapping(unsigned long start,
- unsigned long end)
-{
- pgd_t *pgd_base = swapper_pg_dir;
- unsigned long start_pfn, end_pfn;
- unsigned long big_page_start;
-#ifdef CONFIG_DEBUG_PAGEALLOC
- /*
- * For CONFIG_DEBUG_PAGEALLOC, identity mapping will use small pages.
- * This will simplify cpa(), which otherwise needs to support splitting
- * large pages into small in interrupt context, etc.
- */
- int use_pse = 0;
+#ifdef CONFIG_NEED_MULTIPLE_NODES
+ start_pfn = node_start_pfn[nodeid];
+ end_pfn = node_end_pfn[nodeid];
+ if (start_pfn > max_low_pfn)
+ continue;
+ if (end_pfn > max_low_pfn)
+ end_pfn = max_low_pfn;
#else
- int use_pse = cpu_has_pse;
+ start_pfn = 0;
+ end_pfn = max_low_pfn;
#endif
-
- /*
- * Find space for the kernel direct mapping tables.
- */
- if (!after_init_bootmem)
- find_early_table_space(end, use_pse);
-
-#ifdef CONFIG_X86_PAE
- set_nx();
- if (nx_enabled)
- printk(KERN_INFO "NX (Execute Disable) protection: active\n");
-#endif
-
- /* Enable PSE if available */
- if (cpu_has_pse)
- set_in_cr4(X86_CR4_PSE);
-
- /* Enable PGE if available */
- if (cpu_has_pge) {
- set_in_cr4(X86_CR4_PGE);
- __supported_pte_mask |= _PAGE_GLOBAL;
- }
-
- /*
- * Don't use a large page for the first 2/4MB of memory
- * because there are often fixed size MTRRs in there
- * and overlapping MTRRs into large pages can cause
- * slowdowns.
- */
- big_page_start = PMD_SIZE;
-
- if (start < big_page_start) {
- start_pfn = start >> PAGE_SHIFT;
- end_pfn = min(big_page_start>>PAGE_SHIFT, end>>PAGE_SHIFT);
- } else {
- /* head is not big page alignment ? */
- start_pfn = start >> PAGE_SHIFT;
- end_pfn = ((start + (PMD_SIZE - 1))>>PMD_SHIFT)
- << (PMD_SHIFT - PAGE_SHIFT);
- }
- if (start_pfn < end_pfn)
- kernel_physical_mapping_init(pgd_base, start_pfn, end_pfn, 0);
-
- /* big page range */
- start_pfn = ((start + (PMD_SIZE - 1))>>PMD_SHIFT)
- << (PMD_SHIFT - PAGE_SHIFT);
- if (start_pfn < (big_page_start >> PAGE_SHIFT))
- start_pfn = big_page_start >> PAGE_SHIFT;
- end_pfn = (end>>PMD_SHIFT) << (PMD_SHIFT - PAGE_SHIFT);
- if (start_pfn < end_pfn)
- kernel_physical_mapping_init(pgd_base, start_pfn, end_pfn,
- use_pse);
-
- /* tail is not big page alignment ? */
- start_pfn = end_pfn;
- if (start_pfn > (big_page_start>>PAGE_SHIFT)) {
- end_pfn = end >> PAGE_SHIFT;
- if (start_pfn < end_pfn)
- kernel_physical_mapping_init(pgd_base, start_pfn,
- end_pfn, 0);
+ bootmap = setup_node_bootmem(nodeid, start_pfn, end_pfn,
+ bootmap);
}
- early_ioremap_page_table_range_init(pgd_base);
-
- load_cr3(swapper_pg_dir);
-
- __flush_tlb_all();
-
- if (!after_init_bootmem)
- reserve_early(table_start << PAGE_SHIFT,
- table_end << PAGE_SHIFT, "PGTABLE");
-
- if (!after_init_bootmem)
- early_memtest(start, end);
-
- return end >> PAGE_SHIFT;
+ after_bootmem = 1;
}
-
/*
* paging_init() sets up the page tables - note that the first 8MB are
* already mapped by head.S.
}
#endif
-void free_init_pages(char *what, unsigned long begin, unsigned long end)
-{
-#ifdef CONFIG_DEBUG_PAGEALLOC
- /*
- * If debugging page accesses then do not free this memory but
- * mark them not present - any buggy init-section access will
- * create a kernel page fault:
- */
- printk(KERN_INFO "debug: unmapping init memory %08lx..%08lx\n",
- begin, PAGE_ALIGN(end));
- set_memory_np(begin, (end - begin) >> PAGE_SHIFT);
-#else
- unsigned long addr;
-
- /*
- * We just marked the kernel text read only above, now that
- * we are going to free part of that, we need to make that
- * writeable first.
- */
- set_memory_rw(begin, (end - begin) >> PAGE_SHIFT);
-
- for (addr = begin; addr < end; addr += PAGE_SIZE) {
- ClearPageReserved(virt_to_page(addr));
- init_page_count(virt_to_page(addr));
- memset((void *)addr, POISON_FREE_INITMEM, PAGE_SIZE);
- free_page(addr);
- totalram_pages++;
- }
- printk(KERN_INFO "Freeing %s: %luk freed\n", what, (end - begin) >> 10);
-#endif
-}
-
-void free_initmem(void)
-{
- free_init_pages("unused kernel memory",
- (unsigned long)(&__init_begin),
- (unsigned long)(&__init_end));
-}
-
-#ifdef CONFIG_BLK_DEV_INITRD
-void free_initrd_mem(unsigned long start, unsigned long end)
-{
- free_init_pages("initrd memory", start, end);
-}
-#endif
-
int __init reserve_bootmem_generic(unsigned long phys, unsigned long len,
int flags)
{
#include <asm/kdebug.h>
#include <asm/numa.h>
#include <asm/cacheflush.h>
+#include <asm/init.h>
/*
* end_pfn only includes RAM, while max_pfn_mapped includes all e820 entries.
DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);
-int direct_gbpages
-#ifdef CONFIG_DIRECT_GBPAGES
- = 1
-#endif
-;
-
static int __init parse_direct_gbpages_off(char *arg)
{
direct_gbpages = 0;
* around without checking the pgd every time.
*/
-int after_bootmem;
-
pteval_t __supported_pte_mask __read_mostly = ~_PAGE_IOMAP;
EXPORT_SYMBOL_GPL(__supported_pte_mask);
-static int do_not_nx __cpuinitdata;
+static int disable_nx __cpuinitdata;
/*
* noexec=on|off
return -EINVAL;
if (!strncmp(str, "on", 2)) {
__supported_pte_mask |= _PAGE_NX;
- do_not_nx = 0;
+ disable_nx = 0;
} else if (!strncmp(str, "off", 3)) {
- do_not_nx = 1;
+ disable_nx = 1;
__supported_pte_mask &= ~_PAGE_NX;
}
return 0;
unsigned long efer;
rdmsrl(MSR_EFER, efer);
- if (!(efer & EFER_NX) || do_not_nx)
+ if (!(efer & EFER_NX) || disable_nx)
__supported_pte_mask &= ~_PAGE_NX;
}
return ptr;
}
-void
-set_pte_vaddr_pud(pud_t *pud_page, unsigned long vaddr, pte_t new_pte)
+static pud_t *fill_pud(pgd_t *pgd, unsigned long vaddr)
{
- pud_t *pud;
- pmd_t *pmd;
- pte_t *pte;
+ if (pgd_none(*pgd)) {
+ pud_t *pud = (pud_t *)spp_getpage();
+ pgd_populate(&init_mm, pgd, pud);
+ if (pud != pud_offset(pgd, 0))
+ printk(KERN_ERR "PAGETABLE BUG #00! %p <-> %p\n",
+ pud, pud_offset(pgd, 0));
+ }
+ return pud_offset(pgd, vaddr);
+}
- pud = pud_page + pud_index(vaddr);
+static pmd_t *fill_pmd(pud_t *pud, unsigned long vaddr)
+{
if (pud_none(*pud)) {
- pmd = (pmd_t *) spp_getpage();
+ pmd_t *pmd = (pmd_t *) spp_getpage();
pud_populate(&init_mm, pud, pmd);
- if (pmd != pmd_offset(pud, 0)) {
+ if (pmd != pmd_offset(pud, 0))
printk(KERN_ERR "PAGETABLE BUG #01! %p <-> %p\n",
- pmd, pmd_offset(pud, 0));
- return;
- }
+ pmd, pmd_offset(pud, 0));
}
- pmd = pmd_offset(pud, vaddr);
+ return pmd_offset(pud, vaddr);
+}
+
+static pte_t *fill_pte(pmd_t *pmd, unsigned long vaddr)
+{
if (pmd_none(*pmd)) {
- pte = (pte_t *) spp_getpage();
+ pte_t *pte = (pte_t *) spp_getpage();
pmd_populate_kernel(&init_mm, pmd, pte);
- if (pte != pte_offset_kernel(pmd, 0)) {
+ if (pte != pte_offset_kernel(pmd, 0))
printk(KERN_ERR "PAGETABLE BUG #02!\n");
- return;
- }
}
+ return pte_offset_kernel(pmd, vaddr);
+}
+
+void set_pte_vaddr_pud(pud_t *pud_page, unsigned long vaddr, pte_t new_pte)
+{
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *pte;
+
+ pud = pud_page + pud_index(vaddr);
+ pmd = fill_pmd(pud, vaddr);
+ pte = fill_pte(pmd, vaddr);
- pte = pte_offset_kernel(pmd, vaddr);
set_pte(pte, new_pte);
/*
__flush_tlb_one(vaddr);
}
-void
-set_pte_vaddr(unsigned long vaddr, pte_t pteval)
+void set_pte_vaddr(unsigned long vaddr, pte_t pteval)
{
pgd_t *pgd;
pud_t *pud_page;
set_pte_vaddr_pud(pud_page, vaddr, pteval);
}
+pmd_t * __init populate_extra_pmd(unsigned long vaddr)
+{
+ pgd_t *pgd;
+ pud_t *pud;
+
+ pgd = pgd_offset_k(vaddr);
+ pud = fill_pud(pgd, vaddr);
+ return fill_pmd(pud, vaddr);
+}
+
+pte_t * __init populate_extra_pte(unsigned long vaddr)
+{
+ pmd_t *pmd;
+
+ pmd = populate_extra_pmd(vaddr);
+ return fill_pte(pmd, vaddr);
+}
+
/*
* Create large page table mappings for a range of physical addresses.
*/
}
}
-static unsigned long __initdata table_start;
-static unsigned long __meminitdata table_end;
-static unsigned long __meminitdata table_top;
-
static __ref void *alloc_low_page(unsigned long *phys)
{
- unsigned long pfn = table_end++;
+ unsigned long pfn = e820_table_end++;
void *adr;
if (after_bootmem) {
return adr;
}
- if (pfn >= table_top)
+ if (pfn >= e820_table_top)
panic("alloc_low_page: ran out of memory");
adr = early_memremap(pfn * PAGE_SIZE, PAGE_SIZE);
return phys_pud_init(pud, addr, end, page_size_mask);
}
-static void __init find_early_table_space(unsigned long end, int use_pse,
- int use_gbpages)
-{
- unsigned long puds, pmds, ptes, tables, start;
-
- puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
- tables = roundup(puds * sizeof(pud_t), PAGE_SIZE);
- if (use_gbpages) {
- unsigned long extra;
- extra = end - ((end>>PUD_SHIFT) << PUD_SHIFT);
- pmds = (extra + PMD_SIZE - 1) >> PMD_SHIFT;
- } else
- pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;
- tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE);
-
- if (use_pse) {
- unsigned long extra;
- extra = end - ((end>>PMD_SHIFT) << PMD_SHIFT);
- ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
- } else
- ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT;
- tables += roundup(ptes * sizeof(pte_t), PAGE_SIZE);
-
- /*
- * RED-PEN putting page tables only on node 0 could
- * cause a hotspot and fill up ZONE_DMA. The page tables
- * need roughly 0.5KB per GB.
- */
- start = 0x8000;
- table_start = find_e820_area(start, end, tables, PAGE_SIZE);
- if (table_start == -1UL)
- panic("Cannot find space for the kernel page tables");
-
- table_start >>= PAGE_SHIFT;
- table_end = table_start;
- table_top = table_start + (tables >> PAGE_SHIFT);
-
- printk(KERN_DEBUG "kernel direct mapping tables up to %lx @ %lx-%lx\n",
- end, table_start << PAGE_SHIFT, table_top << PAGE_SHIFT);
-}
-
-static void __init init_gbpages(void)
-{
- if (direct_gbpages && cpu_has_gbpages)
- printk(KERN_INFO "Using GB pages for direct mapping\n");
- else
- direct_gbpages = 0;
-}
-
-static unsigned long __meminit kernel_physical_mapping_init(unsigned long start,
- unsigned long end,
- unsigned long page_size_mask)
+unsigned long __init
+kernel_physical_mapping_init(unsigned long start,
+ unsigned long end,
+ unsigned long page_size_mask)
{
unsigned long next, last_map_addr = end;
return last_map_addr;
}
-struct map_range {
- unsigned long start;
- unsigned long end;
- unsigned page_size_mask;
-};
-
-#define NR_RANGE_MR 5
-
-static int save_mr(struct map_range *mr, int nr_range,
- unsigned long start_pfn, unsigned long end_pfn,
- unsigned long page_size_mask)
-{
-
- if (start_pfn < end_pfn) {
- if (nr_range >= NR_RANGE_MR)
- panic("run out of range for init_memory_mapping\n");
- mr[nr_range].start = start_pfn<<PAGE_SHIFT;
- mr[nr_range].end = end_pfn<<PAGE_SHIFT;
- mr[nr_range].page_size_mask = page_size_mask;
- nr_range++;
- }
-
- return nr_range;
-}
-
-/*
- * Setup the direct mapping of the physical memory at PAGE_OFFSET.
- * This runs before bootmem is initialized and gets pages directly from
- * the physical memory. To access them they are temporarily mapped.
- */
-unsigned long __init_refok init_memory_mapping(unsigned long start,
- unsigned long end)
-{
- unsigned long last_map_addr = 0;
- unsigned long page_size_mask = 0;
- unsigned long start_pfn, end_pfn;
- unsigned long pos;
-
- struct map_range mr[NR_RANGE_MR];
- int nr_range, i;
- int use_pse, use_gbpages;
-
- printk(KERN_INFO "init_memory_mapping: %016lx-%016lx\n", start, end);
-
- /*
- * Find space for the kernel direct mapping tables.
- *
- * Later we should allocate these tables in the local node of the
- * memory mapped. Unfortunately this is done currently before the
- * nodes are discovered.
- */
- if (!after_bootmem)
- init_gbpages();
-
-#ifdef CONFIG_DEBUG_PAGEALLOC
- /*
- * For CONFIG_DEBUG_PAGEALLOC, identity mapping will use small pages.
- * This will simplify cpa(), which otherwise needs to support splitting
- * large pages into small in interrupt context, etc.
- */
- use_pse = use_gbpages = 0;
-#else
- use_pse = cpu_has_pse;
- use_gbpages = direct_gbpages;
-#endif
-
- if (use_gbpages)
- page_size_mask |= 1 << PG_LEVEL_1G;
- if (use_pse)
- page_size_mask |= 1 << PG_LEVEL_2M;
-
- memset(mr, 0, sizeof(mr));
- nr_range = 0;
-
- /* head if not big page alignment ?*/
- start_pfn = start >> PAGE_SHIFT;
- pos = start_pfn << PAGE_SHIFT;
- end_pfn = ((pos + (PMD_SIZE - 1)) >> PMD_SHIFT)
- << (PMD_SHIFT - PAGE_SHIFT);
- if (start_pfn < end_pfn) {
- nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, 0);
- pos = end_pfn << PAGE_SHIFT;
- }
-
- /* big page (2M) range*/
- start_pfn = ((pos + (PMD_SIZE - 1))>>PMD_SHIFT)
- << (PMD_SHIFT - PAGE_SHIFT);
- end_pfn = ((pos + (PUD_SIZE - 1))>>PUD_SHIFT)
- << (PUD_SHIFT - PAGE_SHIFT);
- if (end_pfn > ((end>>PMD_SHIFT)<<(PMD_SHIFT - PAGE_SHIFT)))
- end_pfn = ((end>>PMD_SHIFT)<<(PMD_SHIFT - PAGE_SHIFT));
- if (start_pfn < end_pfn) {
- nr_range = save_mr(mr, nr_range, start_pfn, end_pfn,
- page_size_mask & (1<<PG_LEVEL_2M));
- pos = end_pfn << PAGE_SHIFT;
- }
-
- /* big page (1G) range */
- start_pfn = ((pos + (PUD_SIZE - 1))>>PUD_SHIFT)
- << (PUD_SHIFT - PAGE_SHIFT);
- end_pfn = (end >> PUD_SHIFT) << (PUD_SHIFT - PAGE_SHIFT);
- if (start_pfn < end_pfn) {
- nr_range = save_mr(mr, nr_range, start_pfn, end_pfn,
- page_size_mask &
- ((1<<PG_LEVEL_2M)|(1<<PG_LEVEL_1G)));
- pos = end_pfn << PAGE_SHIFT;
- }
-
- /* tail is not big page (1G) alignment */
- start_pfn = ((pos + (PMD_SIZE - 1))>>PMD_SHIFT)
- << (PMD_SHIFT - PAGE_SHIFT);
- end_pfn = (end >> PMD_SHIFT) << (PMD_SHIFT - PAGE_SHIFT);
- if (start_pfn < end_pfn) {
- nr_range = save_mr(mr, nr_range, start_pfn, end_pfn,
- page_size_mask & (1<<PG_LEVEL_2M));
- pos = end_pfn << PAGE_SHIFT;
- }
-
- /* tail is not big page (2M) alignment */
- start_pfn = pos>>PAGE_SHIFT;
- end_pfn = end>>PAGE_SHIFT;
- nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, 0);
-
- /* try to merge same page size and continuous */
- for (i = 0; nr_range > 1 && i < nr_range - 1; i++) {
- unsigned long old_start;
- if (mr[i].end != mr[i+1].start ||
- mr[i].page_size_mask != mr[i+1].page_size_mask)
- continue;
- /* move it */
- old_start = mr[i].start;
- memmove(&mr[i], &mr[i+1],
- (nr_range - 1 - i) * sizeof (struct map_range));
- mr[i--].start = old_start;
- nr_range--;
- }
-
- for (i = 0; i < nr_range; i++)
- printk(KERN_DEBUG " %010lx - %010lx page %s\n",
- mr[i].start, mr[i].end,
- (mr[i].page_size_mask & (1<<PG_LEVEL_1G))?"1G":(
- (mr[i].page_size_mask & (1<<PG_LEVEL_2M))?"2M":"4k"));
-
- if (!after_bootmem)
- find_early_table_space(end, use_pse, use_gbpages);
-
- for (i = 0; i < nr_range; i++)
- last_map_addr = kernel_physical_mapping_init(
- mr[i].start, mr[i].end,
- mr[i].page_size_mask);
-
- if (!after_bootmem)
- mmu_cr4_features = read_cr4();
- __flush_tlb_all();
-
- if (!after_bootmem && table_end > table_start)
- reserve_early(table_start << PAGE_SHIFT,
- table_end << PAGE_SHIFT, "PGTABLE");
-
- printk(KERN_INFO "last_map_addr: %lx end: %lx\n",
- last_map_addr, end);
-
- if (!after_bootmem)
- early_memtest(start, end);
-
- return last_map_addr >> PAGE_SHIFT;
-}
-
#ifndef CONFIG_NUMA
void __init initmem_init(unsigned long start_pfn, unsigned long end_pfn)
{
#endif /* CONFIG_MEMORY_HOTPLUG */
-/*
- * devmem_is_allowed() checks to see if /dev/mem access to a certain address
- * is valid. The argument is a physical page number.
- *
- *
- * On x86, access has to be given to the first megabyte of ram because that area
- * contains bios code and data regions used by X and dosemu and similar apps.
- * Access has to be given to non-kernel-ram areas as well, these contain the PCI
- * mmio resources as well as potential bios/acpi data regions.
- */
-int devmem_is_allowed(unsigned long pagenr)
-{
- if (pagenr <= 256)
- return 1;
- if (iomem_is_exclusive(pagenr << PAGE_SHIFT))
- return 0;
- if (!page_is_ram(pagenr))
- return 1;
- return 0;
-}
-
-
static struct kcore_list kcore_mem, kcore_vmalloc, kcore_kernel,
kcore_modules, kcore_vsyscall;
initsize >> 10);
}
-void free_init_pages(char *what, unsigned long begin, unsigned long end)
-{
- unsigned long addr = begin;
-
- if (addr >= end)
- return;
-
- /*
- * If debugging page accesses then do not free this memory but
- * mark them not present - any buggy init-section access will
- * create a kernel page fault:
- */
-#ifdef CONFIG_DEBUG_PAGEALLOC
- printk(KERN_INFO "debug: unmapping init memory %08lx..%08lx\n",
- begin, PAGE_ALIGN(end));
- set_memory_np(begin, (end - begin) >> PAGE_SHIFT);
-#else
- printk(KERN_INFO "Freeing %s: %luk freed\n", what, (end - begin) >> 10);
-
- for (; addr < end; addr += PAGE_SIZE) {
- ClearPageReserved(virt_to_page(addr));
- init_page_count(virt_to_page(addr));
- memset((void *)(addr & ~(PAGE_SIZE-1)),
- POISON_FREE_INITMEM, PAGE_SIZE);
- free_page(addr);
- totalram_pages++;
- }
-#endif
-}
-
-void free_initmem(void)
-{
- free_init_pages("unused kernel memory",
- (unsigned long)(&__init_begin),
- (unsigned long)(&__init_end));
-}
-
#ifdef CONFIG_DEBUG_RODATA
const int rodata_test_data = 0xC3;
EXPORT_SYMBOL_GPL(rodata_test_data);
#endif
-#ifdef CONFIG_BLK_DEV_INITRD
-void free_initrd_mem(unsigned long start, unsigned long end)
-{
- free_init_pages("initrd memory", start, end);
-}
-#endif
-
int __init reserve_bootmem_generic(unsigned long phys, unsigned long len,
int flags)
{
} else {
VIRTUAL_BUG_ON(x < PAGE_OFFSET);
x -= PAGE_OFFSET;
- VIRTUAL_BUG_ON(system_state == SYSTEM_BOOTING ? x > MAXMEM :
- !phys_addr_valid(x));
+ VIRTUAL_BUG_ON(!phys_addr_valid(x));
}
return x;
}
if (x < PAGE_OFFSET)
return false;
x -= PAGE_OFFSET;
- if (system_state == SYSTEM_BOOTING ?
- x > MAXMEM : !phys_addr_valid(x)) {
+ if (!phys_addr_valid(x))
return false;
- }
}
return pfn_valid(x >> PAGE_SHIFT);
#ifdef CONFIG_DEBUG_VIRTUAL
unsigned long __phys_addr(unsigned long x)
{
- /* VMALLOC_* aren't constants; not available at the boot time */
+ /* VMALLOC_* aren't constants */
VIRTUAL_BUG_ON(x < PAGE_OFFSET);
- VIRTUAL_BUG_ON(system_state != SYSTEM_BOOTING &&
- is_vmalloc_addr((void *) x));
+ VIRTUAL_BUG_ON(__vmalloc_start_set && is_vmalloc_addr((void *) x));
return x - PAGE_OFFSET;
}
EXPORT_SYMBOL(__phys_addr);
{
if (x < PAGE_OFFSET)
return false;
- if (system_state != SYSTEM_BOOTING && is_vmalloc_addr((void *) x))
+ if (__vmalloc_start_set && is_vmalloc_addr((void *) x))
+ return false;
+ if (x >= FIXADDR_START)
return false;
return pfn_valid((x - PAGE_OFFSET) >> PAGE_SHIFT);
}
return &bm_pte[pte_index(addr)];
}
+static unsigned long slot_virt[FIX_BTMAPS_SLOTS] __initdata;
+
void __init early_ioremap_init(void)
{
pmd_t *pmd;
+ int i;
if (early_ioremap_debug)
printk(KERN_INFO "early_ioremap_init()\n");
+ for (i = 0; i < FIX_BTMAPS_SLOTS; i++)
+ slot_virt[i] = fix_to_virt(FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*i);
+
pmd = early_ioremap_pmd(fix_to_virt(FIX_BTMAP_BEGIN));
memset(bm_pte, 0, sizeof(bm_pte));
pmd_populate_kernel(&init_mm, pmd, bm_pte);
static void __iomem *prev_map[FIX_BTMAPS_SLOTS] __initdata;
static unsigned long prev_size[FIX_BTMAPS_SLOTS] __initdata;
+
static int __init check_early_ioremap_leak(void)
{
int count = 0;
}
late_initcall(check_early_ioremap_leak);
-static void __init __iomem *__early_ioremap(unsigned long phys_addr, unsigned long size, pgprot_t prot)
+static void __init __iomem *
+__early_ioremap(unsigned long phys_addr, unsigned long size, pgprot_t prot)
{
unsigned long offset, last_addr;
unsigned int nrpages;
--nrpages;
}
if (early_ioremap_debug)
- printk(KERN_CONT "%08lx + %08lx\n", offset, fix_to_virt(idx0));
+ printk(KERN_CONT "%08lx + %08lx\n", offset, slot_virt[slot]);
- prev_map[slot] = (void __iomem *)(offset + fix_to_virt(idx0));
+ prev_map[slot] = (void __iomem *)(offset + slot_virt[slot]);
return prev_map[slot];
}
}
prev_map[slot] = NULL;
}
-
-void __this_fixmap_does_not_exist(void)
-{
- WARN_ON(1);
-}
struct list_head list;
struct kmmio_fault_page *release_next;
unsigned long page; /* location of the fault page */
+ bool old_presence; /* page presence prior to arming */
+ bool armed;
/*
* Number of times this page has been registered as a part
* of a probe. If zero, page is disarmed and this may be freed.
- * Used only by writers (RCU).
+ * Used only by writers (RCU) and post_kmmio_handler().
+ * Protected by kmmio_lock when linked into kmmio_page_table.
*/
int count;
};
return NULL;
}
-static void set_page_present(unsigned long addr, bool present,
- unsigned int *pglevel)
+static void set_pmd_presence(pmd_t *pmd, bool present, bool *old)
+{
+ pmdval_t v = pmd_val(*pmd);
+ *old = !!(v & _PAGE_PRESENT);
+ v &= ~_PAGE_PRESENT;
+ if (present)
+ v |= _PAGE_PRESENT;
+ set_pmd(pmd, __pmd(v));
+}
+
+static void set_pte_presence(pte_t *pte, bool present, bool *old)
+{
+ pteval_t v = pte_val(*pte);
+ *old = !!(v & _PAGE_PRESENT);
+ v &= ~_PAGE_PRESENT;
+ if (present)
+ v |= _PAGE_PRESENT;
+ set_pte_atomic(pte, __pte(v));
+}
+
+static int set_page_presence(unsigned long addr, bool present, bool *old)
{
- pteval_t pteval;
- pmdval_t pmdval;
unsigned int level;
- pmd_t *pmd;
pte_t *pte = lookup_address(addr, &level);
if (!pte) {
pr_err("kmmio: no pte for page 0x%08lx\n", addr);
- return;
+ return -1;
}
- if (pglevel)
- *pglevel = level;
-
switch (level) {
case PG_LEVEL_2M:
- pmd = (pmd_t *)pte;
- pmdval = pmd_val(*pmd) & ~_PAGE_PRESENT;
- if (present)
- pmdval |= _PAGE_PRESENT;
- set_pmd(pmd, __pmd(pmdval));
+ set_pmd_presence((pmd_t *)pte, present, old);
break;
-
case PG_LEVEL_4K:
- pteval = pte_val(*pte) & ~_PAGE_PRESENT;
- if (present)
- pteval |= _PAGE_PRESENT;
- set_pte_atomic(pte, __pte(pteval));
+ set_pte_presence(pte, present, old);
break;
-
default:
pr_err("kmmio: unexpected page level 0x%x.\n", level);
- return;
+ return -1;
}
__flush_tlb_one(addr);
+ return 0;
}
-/** Mark the given page as not present. Access to it will trigger a fault. */
-static void arm_kmmio_fault_page(unsigned long page, unsigned int *pglevel)
+/*
+ * Mark the given page as not present. Access to it will trigger a fault.
+ *
+ * Struct kmmio_fault_page is protected by RCU and kmmio_lock, but the
+ * protection is ignored here. RCU read lock is assumed held, so the struct
+ * will not disappear unexpectedly. Furthermore, the caller must guarantee
+ * that double arming the same virtual address (page) cannot occur.
+ *
+ * Double disarming, on the other hand, is allowed, and may occur when a
+ * fault and mmiotrace shutdown happen simultaneously.
+ */
+static int arm_kmmio_fault_page(struct kmmio_fault_page *f)
{
- set_page_present(page & PAGE_MASK, false, pglevel);
+ int ret;
+ WARN_ONCE(f->armed, KERN_ERR "kmmio page already armed.\n");
+ if (f->armed) {
+ pr_warning("kmmio double-arm: page 0x%08lx, ref %d, old %d\n",
+ f->page, f->count, f->old_presence);
+ }
+ ret = set_page_presence(f->page, false, &f->old_presence);
+ WARN_ONCE(ret < 0, KERN_ERR "kmmio arming 0x%08lx failed.\n", f->page);
+ f->armed = true;
+ return ret;
}
-/** Mark the given page as present. */
-static void disarm_kmmio_fault_page(unsigned long page, unsigned int *pglevel)
+/** Restore the given page to saved presence state. */
+static void disarm_kmmio_fault_page(struct kmmio_fault_page *f)
{
- set_page_present(page & PAGE_MASK, true, pglevel);
+ bool tmp;
+ int ret = set_page_presence(f->page, f->old_presence, &tmp);
+ WARN_ONCE(ret < 0,
+ KERN_ERR "kmmio disarming 0x%08lx failed.\n", f->page);
+ f->armed = false;
}
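
Arming records the page's original presence bit in ->old_presence and
disarming restores it, so a page that was genuinely not present before
tracing stays not present afterwards. A minimal sketch of the intended
pairing, on a hypothetical descriptor for an address addr (RCU read
protection and kmmio_lock omitted for brevity):

	struct kmmio_fault_page f = { .page = addr & PAGE_MASK };

	if (arm_kmmio_fault_page(&f))	/* saves f.old_presence */
		return;			/* no pte, nothing was armed */
	/* accesses to f.page now fault into the kmmio handler */
	disarm_kmmio_fault_page(&f);	/* restores the saved state */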
/*
ctx = &get_cpu_var(kmmio_ctx);
if (ctx->active) {
- disarm_kmmio_fault_page(faultpage->page, NULL);
if (addr == ctx->addr) {
/*
- * On SMP we sometimes get recursive probe hits on the
- * same address. Context is already saved, fall out.
+ * A second fault on the same page means some other
+ * condition needs handling by do_page_fault();
+ * the page really not being present is the most common.
*/
- pr_debug("kmmio: duplicate probe hit on CPU %d, for "
- "address 0x%08lx.\n",
- smp_processor_id(), addr);
- ret = 1;
- goto no_kmmio_ctx;
- }
- /*
- * Prevent overwriting already in-flight context.
- * This should not happen, let's hope disarming at least
- * prevents a panic.
- */
- pr_emerg("kmmio: recursive probe hit on CPU %d, "
+ pr_debug("kmmio: secondary hit for 0x%08lx CPU %d.\n",
+ addr, smp_processor_id());
+
+ if (!faultpage->old_presence)
+ pr_info("kmmio: unexpected secondary hit for "
+ "address 0x%08lx on CPU %d.\n", addr,
+ smp_processor_id());
+ } else {
+ /*
+ * Prevent overwriting already in-flight context.
+ * This should not happen, let's hope disarming at
+ * least prevents a panic.
+ */
+ pr_emerg("kmmio: recursive probe hit on CPU %d, "
"for address 0x%08lx. Ignoring.\n",
smp_processor_id(), addr);
- pr_emerg("kmmio: previous hit was at 0x%08lx.\n",
- ctx->addr);
+ pr_emerg("kmmio: previous hit was at 0x%08lx.\n",
+ ctx->addr);
+ disarm_kmmio_fault_page(faultpage);
+ }
goto no_kmmio_ctx;
}
ctx->active++;
regs->flags &= ~X86_EFLAGS_IF;
/* Now we set present bit in PTE and single step. */
- disarm_kmmio_fault_page(ctx->fpage->page, NULL);
+ disarm_kmmio_fault_page(ctx->fpage);
/*
* If another cpu accesses the same page while we are stepping,
struct kmmio_context *ctx = &get_cpu_var(kmmio_ctx);
if (!ctx->active) {
- pr_debug("kmmio: spurious debug trap on CPU %d.\n",
+ pr_warning("kmmio: spurious debug trap on CPU %d.\n",
smp_processor_id());
goto out;
}
if (ctx->probe && ctx->probe->post_handler)
ctx->probe->post_handler(ctx->probe, condition, regs);
- arm_kmmio_fault_page(ctx->fpage->page, NULL);
+ /* Prevent racing against release_kmmio_fault_page(). */
+ spin_lock(&kmmio_lock);
+ if (ctx->fpage->count)
+ arm_kmmio_fault_page(ctx->fpage);
+ spin_unlock(&kmmio_lock);
regs->flags &= ~X86_EFLAGS_TF;
regs->flags |= ctx->saved_flags;
f = get_kmmio_fault_page(page);
if (f) {
if (!f->count)
- arm_kmmio_fault_page(f->page, NULL);
+ arm_kmmio_fault_page(f);
f->count++;
return 0;
}
- f = kmalloc(sizeof(*f), GFP_ATOMIC);
+ f = kzalloc(sizeof(*f), GFP_ATOMIC);
if (!f)
return -1;
f->count = 1;
f->page = page;
- list_add_rcu(&f->list, kmmio_page_list(f->page));
- arm_kmmio_fault_page(f->page, NULL);
+ if (arm_kmmio_fault_page(f)) {
+ kfree(f);
+ return -1;
+ }
+
+ list_add_rcu(&f->list, kmmio_page_list(f->page));
return 0;
}
f->count--;
BUG_ON(f->count < 0);
if (!f->count) {
- disarm_kmmio_fault_page(f->page, NULL);
+ disarm_kmmio_fault_page(f);
f->release_next = *release_list;
*release_list = f;
}
static void remove_kmmio_fault_pages(struct rcu_head *head)
{
- struct kmmio_delayed_release *dr = container_of(
- head,
- struct kmmio_delayed_release,
- rcu);
+ struct kmmio_delayed_release *dr =
+ container_of(head, struct kmmio_delayed_release, rcu);
struct kmmio_fault_page *p = dr->release_list;
struct kmmio_fault_page **prevp = &dr->release_list;
unsigned long flags;
+
spin_lock_irqsave(&kmmio_lock, flags);
while (p) {
- if (!p->count)
+ if (!p->count) {
list_del_rcu(&p->list);
- else
+ prevp = &p->release_next;
+ } else {
*prevp = p->release_next;
- prevp = &p->release_next;
+ }
p = p->release_next;
}
spin_unlock_irqrestore(&kmmio_lock, flags);
+
/* This is the real RCU destroy call. */
call_rcu(&dr->rcu, rcu_free_kmmio_fault_pages);
}
{
if (arg)
memtest_pattern = simple_strtoul(arg, NULL, 0);
+ else
+ memtest_pattern = ARRAY_SIZE(patterns);
+
return 0;
}
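
With this fallback the option may be given bare on the kernel command
line. Assuming the usual interpretation of memtest_pattern as a pattern
count, for example:

	memtest		(test with every pattern in the table)
	memtest=4	(restrict the run to the first four patterns)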
for_each_online_node(nid)
propagate_e820_map_node(nid);
- for_each_online_node(nid)
+ for_each_online_node(nid) {
memset(NODE_DATA(nid), 0, sizeof(struct pglist_data));
+ NODE_DATA(nid)->bdata = &bootmem_node_data[nid];
+ }
- NODE_DATA(0)->bdata = &bootmem_node_data[0];
setup_bootmem_allocator();
}
-void __init set_highmem_pages_init(void)
-{
-#ifdef CONFIG_HIGHMEM
- struct zone *zone;
- int nid;
-
- for_each_zone(zone) {
- unsigned long zone_start_pfn, zone_end_pfn;
-
- if (!is_highmem(zone))
- continue;
-
- zone_start_pfn = zone->zone_start_pfn;
- zone_end_pfn = zone_start_pfn + zone->spanned_pages;
-
- nid = zone_to_nid(zone);
- printk(KERN_INFO "Initializing %s for node %d (%08lx:%08lx)\n",
- zone->name, nid, zone_start_pfn, zone_end_pfn);
-
- add_highpages_with_active_regions(nid, zone_start_pfn,
- zone_end_pfn);
- }
- totalram_pages += totalhigh_pages;
-#endif
-}
-
#ifdef CONFIG_MEMORY_HOTPLUG
static int paddr_to_nid(u64 addr)
{
return young;
}
+/**
+ * reserve_top_address - reserves a hole in the top of kernel address space
+ * @reserve - size of hole to reserve
+ *
+ * Can be used to relocate the fixmap area and poke a hole in the top
+ * of kernel address space to make room for a hypervisor.
+ */
+void __init reserve_top_address(unsigned long reserve)
+{
+#ifdef CONFIG_X86_32
+ BUG_ON(fixmaps_set > 0);
+ printk(KERN_INFO "Reserving virtual address space above 0x%08x\n",
+ (int)-reserve);
+ __FIXADDR_TOP = -reserve - PAGE_SIZE;
+ __VMALLOC_RESERVE += reserve;
+#endif
+}
+
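A 32-bit hypervisor port is expected to call this exactly once, early in
boot, before the first fixmap entry is set (the BUG_ON above enforces the
ordering). A minimal sketch, with a purely illustrative 4 MB hole:

	/* hypothetical early setup: leave a 4 MB hypervisor hole on top */
	reserve_top_address(4 * 1024 * 1024);
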
int fixmaps_set;
void __native_set_fixmap(enum fixed_addresses idx, pte_t pte)
#include <asm/tlb.h>
#include <asm/tlbflush.h>
+unsigned int __VMALLOC_RESERVE = 128 << 20;
+
/*
* Associate a virtual page frame with a given physical page frame
* and protection flags for that frame.
unsigned long __FIXADDR_TOP = 0xfffff000;
EXPORT_SYMBOL(__FIXADDR_TOP);
-/**
- * reserve_top_address - reserves a hole in the top of kernel address space
- * @reserve - size of hole to reserve
- *
- * Can be used to relocate the fixmap area and poke a hole in the top
- * of kernel address space to make room for a hypervisor.
- */
-void __init reserve_top_address(unsigned long reserve)
-{
- BUG_ON(fixmaps_set > 0);
- printk(KERN_INFO "Reserving virtual address space above 0x%08x\n",
- (int)-reserve);
- __FIXADDR_TOP = -reserve - PAGE_SIZE;
- __VMALLOC_RESERVE += reserve;
-}
-
/*
* vmalloc=size forces the vmalloc area to be exactly 'size'
* bytes. This can be used to increase (or decrease) the
/*
- * Written by Pekka Paalanen, 2008 <pq@iki.fi>
+ * Written by Pekka Paalanen, 2008-2009 <pq@iki.fi>
*/
#include <linux/module.h>
#include <linux/io.h>
static unsigned long mmio_address;
module_param(mmio_address, ulong, 0);
-MODULE_PARM_DESC(mmio_address, "Start address of the mapping of 16 kB.");
+MODULE_PARM_DESC(mmio_address, " Start address of the mapping of 16 kB "
+ "(or 8 MB if read_far is non-zero).");
+
+static unsigned long read_far = 0x400100;
+module_param(read_far, ulong, 0);
+MODULE_PARM_DESC(read_far, " Offset of a 32-bit read within 8 MB "
+ "(default: 0x400100).");
+
+static unsigned v16(unsigned i)
+{
+ return i * 12 + 7;
+}
+
+static unsigned v32(unsigned i)
+{
+ return i * 212371 + 13;
+}
static void do_write_test(void __iomem *p)
{
unsigned int i;
+ pr_info(MODULE_NAME ": write test.\n");
mmiotrace_printk("Write test.\n");
+
for (i = 0; i < 256; i++)
iowrite8(i, p + i);
+
for (i = 1024; i < (5 * 1024); i += 2)
- iowrite16(i * 12 + 7, p + i);
+ iowrite16(v16(i), p + i);
+
for (i = (5 * 1024); i < (16 * 1024); i += 4)
- iowrite32(i * 212371 + 13, p + i);
+ iowrite32(v32(i), p + i);
}
static void do_read_test(void __iomem *p)
{
unsigned int i;
+ unsigned errs[3] = { 0 };
+ pr_info(MODULE_NAME ": read test.\n");
mmiotrace_printk("Read test.\n");
+
for (i = 0; i < 256; i++)
- ioread8(p + i);
+ if (ioread8(p + i) != i)
+ ++errs[0];
+
for (i = 1024; i < (5 * 1024); i += 2)
- ioread16(p + i);
+ if (ioread16(p + i) != v16(i))
+ ++errs[1];
+
for (i = (5 * 1024); i < (16 * 1024); i += 4)
- ioread32(p + i);
+ if (ioread32(p + i) != v32(i))
+ ++errs[2];
+
+ mmiotrace_printk("Read errors: 8-bit %d, 16-bit %d, 32-bit %d.\n",
+ errs[0], errs[1], errs[2]);
}
-static void do_test(void)
+static void do_read_far_test(void __iomem *p)
{
- void __iomem *p = ioremap_nocache(mmio_address, 0x4000);
+ pr_info(MODULE_NAME ": read far test.\n");
+ mmiotrace_printk("Read far test.\n");
+
+ ioread32(p + read_far);
+}
+
+static void do_test(unsigned long size)
+{
+ void __iomem *p = ioremap_nocache(mmio_address, size);
if (!p) {
pr_err(MODULE_NAME ": could not ioremap, aborting.\n");
return;
mmiotrace_printk("ioremap returned %p.\n", p);
do_write_test(p);
do_read_test(p);
+ if (read_far && read_far < size - 4)
+ do_read_far_test(p);
iounmap(p);
}
static int __init init(void)
{
+ unsigned long size = (read_far) ? (8 << 20) : (16 << 10);
+
if (mmio_address == 0) {
pr_err(MODULE_NAME ": you have to use the module argument "
"mmio_address.\n");
return -ENXIO;
}
- pr_warning(MODULE_NAME ": WARNING: mapping 16 kB @ 0x%08lx "
- "in PCI address space, and writing "
- "rubbish in there.\n", mmio_address);
- do_test();
+ pr_warning(MODULE_NAME ": WARNING: mapping %lu kB @ 0x%08lx in PCI "
+ "address space, and writing 16 kB of rubbish in there.\n",
+ size >> 10, mmio_address);
+ do_test(size);
+ pr_info(MODULE_NAME ": All done.\n");
return 0;
}
if (cpu_has_arch_perfmon) {
union cpuid10_eax eax;
eax.full = cpuid_eax(0xa);
- if (counter_width < eax.split.bit_width)
- counter_width = eax.split.bit_width;
+
+ /*
+ * For Core2 (family 6, model 15), don't reset the
+ * counter width:
+ */
+ if (!(eax.split.version_id == 0 &&
+ current_cpu_data.x86 == 6 &&
+ current_cpu_data.x86_model == 15)) {
+
+ if (counter_width < eax.split.bit_width)
+ counter_width = eax.split.bit_width;
+ }
}
/* clear all counters */
vcpup = &per_cpu(xen_vcpu_info, cpu);
- info.mfn = virt_to_mfn(vcpup);
+ info.mfn = arbitrary_virt_to_mfn(vcpup);
info.offset = offset_in_page(vcpup);
printk(KERN_DEBUG "trying to map vcpu_info %d at %p, mfn %llx, offset %d\n",
frames = mcs.args;
for (f = 0; va < dtr->address + size; va += PAGE_SIZE, f++) {
- frames[f] = virt_to_mfn(va);
+ frames[f] = arbitrary_virt_to_mfn((void *)va);
+
make_lowmem_page_readonly((void *)va);
+ make_lowmem_page_readonly(mfn_to_virt(frames[f]));
}
MULTI_set_gdt(mcs.mc, frames, size / sizeof(struct desc_struct));
unsigned int cpu, unsigned int i)
{
struct desc_struct *gdt = get_cpu_gdt_table(cpu);
- xmaddr_t maddr = virt_to_machine(&gdt[GDT_ENTRY_TLS_MIN+i]);
+ xmaddr_t maddr = arbitrary_virt_to_machine(&gdt[GDT_ENTRY_TLS_MIN+i]);
struct multicall_space mc = __xen_mc_entry(0);
MULTI_update_descriptor(mc.mc, maddr.maddr, t->tls_array[i]);
break;
default: {
- xmaddr_t maddr = virt_to_machine(&dt[entry]);
+ xmaddr_t maddr = arbitrary_virt_to_machine(&dt[entry]);
xen_mc_flush();
if (HYPERVISOR_update_descriptor(maddr.maddr, *(u64 *)desc))
p2m_top[topidx][idx] = mfn;
}
+unsigned long arbitrary_virt_to_mfn(void *vaddr)
+{
+ xmaddr_t maddr = arbitrary_virt_to_machine(vaddr);
+
+ return PFN_DOWN(maddr.maddr);
+}
+
xmaddr_t arbitrary_virt_to_machine(void *vaddr)
{
unsigned long address = (unsigned long)vaddr;
{
struct vcpu_guest_context *ctxt;
struct desc_struct *gdt;
+ unsigned long gdt_mfn;
if (cpumask_test_and_set_cpu(cpu, xen_cpu_initialized_map))
return 0;
ctxt->ldt_ents = 0;
BUG_ON((unsigned long)gdt & ~PAGE_MASK);
+
+ gdt_mfn = arbitrary_virt_to_mfn(gdt);
make_lowmem_page_readonly(gdt);
+ make_lowmem_page_readonly(mfn_to_virt(gdt_mfn));
- ctxt->gdt_frames[0] = virt_to_mfn(gdt);
+ ctxt->gdt_frames[0] = gdt_mfn;
ctxt->gdt_ents = GDT_ENTRIES;
ctxt->user_regs.cs = __KERNEL_CS;
help
Can we use information of configuration file?
-config HIGHMEM
- bool "High memory support"
-
endmenu
menu "Platform options"
#include <asm/setup.h>
#include <asm/param.h>
+#include <platform/hardware.h>
+
#if defined(CONFIG_VGA_CONSOLE) || defined(CONFIG_DUMMY_CONSOLE)
struct screen_info screen_info = { 0, 24, 0, 0, 0, 80, 0, 0, 0, 24, 1, 16};
#endif
#include <linux/stringify.h>
#include <linux/kallsyms.h>
#include <linux/delay.h>
+#include <linux/hardirq.h>
#include <asm/ptrace.h>
#include <asm/timex.h>
#include <linux/mm.h>
#include <linux/module.h>
+#include <linux/hardirq.h>
#include <asm/mmu_context.h>
#include <asm/cacheflush.h>
#include <asm/hardirq.h>
}
-static void rs_put_char(struct tty_struct *tty, unsigned char ch)
+static int rs_put_char(struct tty_struct *tty, unsigned char ch)
{
char buf[2];
- if (!tty)
- return;
-
buf[0] = ch;
buf[1] = '\0'; /* Is this NULL necessary? */
__simc (SYS_write, 1, (unsigned long) buf, 1, 0, 0);
+ return 1;
}
static void rs_flush_chars(struct tty_struct *tty)
}
static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
- struct bio *bio,
- unsigned int *seg_size_ptr)
+ struct bio *bio)
{
unsigned int phys_size;
struct bio_vec *bv, *bvprv = NULL;
int cluster, i, high, highprv = 1;
unsigned int seg_size, nr_phys_segs;
- struct bio *fbio;
+ struct bio *fbio, *bbio;
if (!bio)
return 0;
seg_size = bv->bv_len;
highprv = high;
}
+ bbio = bio;
}
- if (seg_size_ptr)
- *seg_size_ptr = seg_size;
+ if (nr_phys_segs == 1 && seg_size > fbio->bi_seg_front_size)
+ fbio->bi_seg_front_size = seg_size;
+ if (seg_size > bbio->bi_seg_back_size)
+ bbio->bi_seg_back_size = seg_size;
return nr_phys_segs;
}
void blk_recalc_rq_segments(struct request *rq)
{
- unsigned int seg_size = 0, phys_segs;
-
- phys_segs = __blk_recalc_rq_segments(rq->q, rq->bio, &seg_size);
-
- if (phys_segs == 1 && seg_size > rq->bio->bi_seg_front_size)
- rq->bio->bi_seg_front_size = seg_size;
- if (seg_size > rq->biotail->bi_seg_back_size)
- rq->biotail->bi_seg_back_size = seg_size;
-
- rq->nr_phys_segments = phys_segs;
+ rq->nr_phys_segments = __blk_recalc_rq_segments(rq->q, rq->bio);
}
void blk_recount_segments(struct request_queue *q, struct bio *bio)
struct bio *nxt = bio->bi_next;
bio->bi_next = NULL;
- bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio, NULL);
+ bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio);
bio->bi_next = nxt;
bio->bi_flags |= (1 << BIO_SEG_VALID);
}
if (!bt->sequence)
goto err;
- bt->msg_data = __alloc_percpu(BLK_TN_MAX_MSG);
+ bt->msg_data = __alloc_percpu(BLK_TN_MAX_MSG, __alignof__(char));
if (!bt->msg_data)
goto err;
mask &= ~(CRYPTO_ALG_LARVAL | CRYPTO_ALG_DEAD);
type &= mask;
- alg = try_then_request_module(crypto_alg_lookup(name, type, mask),
- name);
+ alg = crypto_alg_lookup(name, type, mask);
+ if (!alg) {
+ char tmp[CRYPTO_MAX_ALG_NAME];
+
+ request_module(name);
+
+ if (!((type ^ CRYPTO_ALG_NEED_FALLBACK) & mask) &&
+ snprintf(tmp, sizeof(tmp), "%s-all", name) < sizeof(tmp))
+ request_module(tmp);
+
+ alg = crypto_alg_lookup(name, type, mask);
+ }
+
if (alg)
return crypto_is_larval(alg) ? crypto_larval_wait(alg) : alg;
continue;
}
- if (!performance || !percpu_ptr(performance, i)) {
+ if (!performance || !per_cpu_ptr(performance, i)) {
retval = -EINVAL;
continue;
}
- pr->performance = percpu_ptr(performance, i);
+ pr->performance = per_cpu_ptr(performance, i);
cpumask_set_cpu(i, pr->performance->shared_cpu_map);
if (acpi_processor_get_psd(pr)) {
retval = -EINVAL;
{ PCI_VDEVICE(NVIDIA, 0x0abd), board_ahci }, /* MCP79 */
{ PCI_VDEVICE(NVIDIA, 0x0abe), board_ahci }, /* MCP79 */
{ PCI_VDEVICE(NVIDIA, 0x0abf), board_ahci }, /* MCP79 */
- { PCI_VDEVICE(NVIDIA, 0x0bc8), board_ahci }, /* MCP7B */
- { PCI_VDEVICE(NVIDIA, 0x0bc9), board_ahci }, /* MCP7B */
- { PCI_VDEVICE(NVIDIA, 0x0bca), board_ahci }, /* MCP7B */
- { PCI_VDEVICE(NVIDIA, 0x0bcb), board_ahci }, /* MCP7B */
- { PCI_VDEVICE(NVIDIA, 0x0bcc), board_ahci }, /* MCP7B */
- { PCI_VDEVICE(NVIDIA, 0x0bcd), board_ahci }, /* MCP7B */
- { PCI_VDEVICE(NVIDIA, 0x0bce), board_ahci }, /* MCP7B */
- { PCI_VDEVICE(NVIDIA, 0x0bcf), board_ahci }, /* MCP7B */
- { PCI_VDEVICE(NVIDIA, 0x0bc4), board_ahci }, /* MCP7B */
- { PCI_VDEVICE(NVIDIA, 0x0bc5), board_ahci }, /* MCP7B */
- { PCI_VDEVICE(NVIDIA, 0x0bc6), board_ahci }, /* MCP7B */
- { PCI_VDEVICE(NVIDIA, 0x0bc7), board_ahci }, /* MCP7B */
+ { PCI_VDEVICE(NVIDIA, 0x0d84), board_ahci }, /* MCP89 */
+ { PCI_VDEVICE(NVIDIA, 0x0d85), board_ahci }, /* MCP89 */
+ { PCI_VDEVICE(NVIDIA, 0x0d86), board_ahci }, /* MCP89 */
+ { PCI_VDEVICE(NVIDIA, 0x0d87), board_ahci }, /* MCP89 */
+ { PCI_VDEVICE(NVIDIA, 0x0d88), board_ahci }, /* MCP89 */
+ { PCI_VDEVICE(NVIDIA, 0x0d89), board_ahci }, /* MCP89 */
+ { PCI_VDEVICE(NVIDIA, 0x0d8a), board_ahci }, /* MCP89 */
+ { PCI_VDEVICE(NVIDIA, 0x0d8b), board_ahci }, /* MCP89 */
+ { PCI_VDEVICE(NVIDIA, 0x0d8c), board_ahci }, /* MCP89 */
+ { PCI_VDEVICE(NVIDIA, 0x0d8d), board_ahci }, /* MCP89 */
+ { PCI_VDEVICE(NVIDIA, 0x0d8e), board_ahci }, /* MCP89 */
+ { PCI_VDEVICE(NVIDIA, 0x0d8f), board_ahci }, /* MCP89 */
/* SiS */
{ PCI_VDEVICE(SI, 0x1184), board_ahci }, /* SiS 966 */
{
if (ata_id_has_lba(id)) {
if (ata_id_has_lba48(id))
- return ata_id_u64(id, 100);
+ return ata_id_u64(id, ATA_ID_LBA_CAPACITY_2);
else
- return ata_id_u32(id, 60);
+ return ata_id_u32(id, ATA_ID_LBA_CAPACITY);
} else {
if (ata_id_current_chs_valid(id))
- return ata_id_u32(id, 57);
+ return id[ATA_ID_CUR_CYLS] * id[ATA_ID_CUR_HEADS] *
+ id[ATA_ID_CUR_SECTORS];
else
- return id[1] * id[3] * id[6];
+ return id[ATA_ID_CYLS] * id[ATA_ID_HEADS] *
+ id[ATA_ID_SECTORS];
}
}
VPRINTK("unmapping %u sg elements\n", qc->n_elem);
if (qc->n_elem)
- dma_unmap_sg(ap->dev, sg, qc->n_elem, dir);
+ dma_unmap_sg(ap->dev, sg, qc->orig_n_elem, dir);
qc->flags &= ~ATA_QCFLAG_DMAMAP;
qc->sg = NULL;
return -1;
DPRINTK("%d sg elements mapped\n", n_elem);
-
+ qc->orig_n_elem = qc->n_elem;
qc->n_elem = n_elem;
qc->flags |= ATA_QCFLAG_DMAMAP;
}
/* prereset() might have cleared ATA_EH_RESET. If so,
- * bang classes and return.
+ * bang classes, thaw and return.
*/
if (reset && !(ehc->i.action & ATA_EH_RESET)) {
ata_for_each_dev(dev, link, ALL)
classes[dev->devno] = ATA_DEV_NONE;
+ if ((ap->pflags & ATA_PFLAG_FROZEN) &&
+ ata_is_host_link(link))
+ ata_eh_thaw_port(ap);
rc = 0;
goto out;
}
int i;
for (i = 0; i < ATA_EH_UA_TRIES; i++) {
- u8 sense_buffer[SCSI_SENSE_BUFFERSIZE];
+ u8 *sense_buffer = dev->link->ap->sector_buf;
u8 sense_key = 0;
unsigned int err_mask;
module_init(nv_init);
module_exit(nv_exit);
module_param_named(adma, adma_enabled, bool, 0444);
-MODULE_PARM_DESC(adma, "Enable use of ADMA (Default: true)");
+MODULE_PARM_DESC(adma, "Enable use of ADMA (Default: false)");
module_param_named(swncq, swncq_enabled, bool, 0444);
MODULE_PARM_DESC(swncq, "Enable use of SWNCQ (Default: true)");
sect_start_pfn = section_nr_to_pfn(mem_blk->phys_index);
sect_end_pfn = sect_start_pfn + PAGES_PER_SECTION - 1;
for (pfn = sect_start_pfn; pfn <= sect_end_pfn; pfn++) {
- unsigned int nid;
+ int nid;
nid = get_nid_for_pfn(pfn);
if (nid < 0)
return;
while (atomic_read(&skb_shinfo(skb)->dataref) != 1 && i-- > 0)
msleep(Sms);
- if (i <= 0) {
+ if (i < 0) {
printk(KERN_ERR
"aoe: %s holds ref: %s\n",
skb->dev ? skb->dev->name : "netif",
if (cciss_hard_reset_controller(pdev) || cciss_reset_msi(pdev))
return -ENODEV;
- /* Some devices (notably the HP Smart Array 5i Controller)
- need a little pause here */
- schedule_timeout_uninterruptible(30*HZ);
-
- /* Now try to get the controller to respond to a no-op */
+ /* Now try to get the controller to respond to a no-op. Some
+ devices (notably the HP Smart Array 5i Controller) need
+ up to 30 seconds to respond. */
for (i=0; i<30; i++) {
if (cciss_noop(pdev) == 0)
break;
struct loop_device *lo = p->lo;
struct page *page = buf->page;
sector_t IV;
- size_t size;
- int ret;
+ int size, ret;
ret = buf->ops->confirm(pipe, buf);
if (unlikely(ret))
break;
case XenbusStateClosing:
+ if (info->gd == NULL)
+ xenbus_dev_fatal(dev, -ENODEV, "gd is NULL");
bd = bdget_disk(info->gd, 0);
if (bd == NULL)
xenbus_dev_fatal(dev, -ENODEV, "bdget failed");
nb_order = (nb_order >> 1) & 7;
pci_read_config_dword(nb, AMD64_GARTAPERTUREBASE, &nb_base);
nb_aper = nb_base << 25;
- if (agp_aperture_valid(nb_aper, (32*1024*1024)<<nb_order)) {
- return 0;
- }
/* Northbridge seems to contain crap. Try the AGP bridge. */
pci_read_config_word(agp, cap+0x14, &apsize);
- if (apsize == 0xffff)
+ if (apsize == 0xffff) {
+ if (agp_aperture_valid(nb_aper, (32*1024*1024)<<nb_order))
+ return 0;
return -1;
+ }
apsize &= 0xfff;
/* Some BIOS use weird encodings not in the AGPv3 table. */
order = nb_order;
}
+ if (nb_order >= order) {
+ if (agp_aperture_valid(nb_aper, (32*1024*1024)<<nb_order))
+ return 0;
+ }
+
dev_info(&agp->dev, "aperture from AGP @ %Lx size %u MB\n",
aper, 32 << order);
if (order < 0 || !agp_aperture_valid(aper, (32*1024*1024)<<order))
break;
}
}
- if (gtt_entries > 0)
+ if (gtt_entries > 0) {
dev_info(&agp_bridge->dev->dev, "detected %dK %s memory\n",
gtt_entries / KB(1), local ? "local" : "stolen");
- else
+ gtt_entries /= KB(4);
+ } else {
dev_info(&agp_bridge->dev->dev,
"no pre-allocated video memory detected\n");
- gtt_entries /= KB(4);
+ gtt_entries = 0;
+ }
intel_private.gtt_entries = gtt_entries;
}
.release = cpufreq_sysfs_release,
};
-static struct kobj_type ktype_empty_cpufreq = {
- .sysfs_ops = &sysfs_ops,
- .release = cpufreq_sysfs_release,
-};
-
/**
* cpufreq_add_dev - add a CPU device
memcpy(&new_policy, policy, sizeof(struct cpufreq_policy));
/* prepare interface data */
- if (!cpufreq_driver->hide_interface) {
- ret = kobject_init_and_add(&policy->kobj, &ktype_cpufreq,
- &sys_dev->kobj, "cpufreq");
+ ret = kobject_init_and_add(&policy->kobj, &ktype_cpufreq, &sys_dev->kobj,
+ "cpufreq");
+ if (ret)
+ goto err_out_driver_exit;
+
+ /* set up files for this cpu device */
+ drv_attr = cpufreq_driver->attr;
+ while ((drv_attr) && (*drv_attr)) {
+ ret = sysfs_create_file(&policy->kobj, &((*drv_attr)->attr));
if (ret)
goto err_out_driver_exit;
-
- /* set up files for this cpu device */
- drv_attr = cpufreq_driver->attr;
- while ((drv_attr) && (*drv_attr)) {
- ret = sysfs_create_file(&policy->kobj,
- &((*drv_attr)->attr));
- if (ret)
- goto err_out_driver_exit;
- drv_attr++;
- }
- if (cpufreq_driver->get) {
- ret = sysfs_create_file(&policy->kobj,
- &cpuinfo_cur_freq.attr);
- if (ret)
- goto err_out_driver_exit;
- }
- if (cpufreq_driver->target) {
- ret = sysfs_create_file(&policy->kobj,
- &scaling_cur_freq.attr);
- if (ret)
- goto err_out_driver_exit;
- }
- } else {
- ret = kobject_init_and_add(&policy->kobj, &ktype_empty_cpufreq,
- &sys_dev->kobj, "cpufreq");
+ drv_attr++;
+ }
+ if (cpufreq_driver->get) {
+ ret = sysfs_create_file(&policy->kobj, &cpuinfo_cur_freq.attr);
+ if (ret)
+ goto err_out_driver_exit;
+ }
+ if (cpufreq_driver->target) {
+ ret = sysfs_create_file(&policy->kobj, &scaling_cur_freq.attr);
if (ret)
goto err_out_driver_exit;
}
if (!ctx_pool) {
goto err;
}
- ret = qmgr_request_queue(SEND_QID, NPE_QLEN_TOTAL, 0, 0);
+ ret = qmgr_request_queue(SEND_QID, NPE_QLEN_TOTAL, 0, 0,
+ "ixp_crypto:out", NULL);
if (ret)
goto err;
- ret = qmgr_request_queue(RECV_QID, NPE_QLEN, 0, 0);
+ ret = qmgr_request_queue(RECV_QID, NPE_QLEN, 0, 0,
+ "ixp_crypto:in", NULL);
if (ret) {
qmgr_release_queue(SEND_QID);
goto err;
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Michal Ludvig");
-MODULE_ALIAS("aes");
+MODULE_ALIAS("aes-all");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Michal Ludvig");
-MODULE_ALIAS("sha1");
-MODULE_ALIAS("sha256");
+MODULE_ALIAS("sha1-all");
+MODULE_ALIAS("sha256-all");
MODULE_ALIAS("sha1-padlock");
MODULE_ALIAS("sha256-padlock");
/*
- * Copyright(c) 2007 Intel Corporation. All rights reserved.
+ * Copyright(c) 2007 - 2009 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
static void __exit dmatest_exit(void)
{
struct dmatest_chan *dtc, *_dtc;
+ struct dma_chan *chan;
list_for_each_entry_safe(dtc, _dtc, &dmatest_channels, node) {
list_del(&dtc->node);
+ chan = dtc->chan;
dmatest_cleanup_channel(dtc);
pr_debug("dmatest: dropped channel %s\n",
- dma_chan_name(dtc->chan));
- dma_release_channel(dtc->chan);
+ dma_chan_name(chan));
+ dma_release_channel(chan);
}
}
module_exit(dmatest_exit);
static void dma_halt(struct fsl_dma_chan *fsl_chan)
{
- int i = 0;
+ int i;
+
DMA_OUT(fsl_chan, &fsl_chan->reg_base->mr,
DMA_IN(fsl_chan, &fsl_chan->reg_base->mr, 32) | FSL_DMA_MR_CA,
32);
DMA_IN(fsl_chan, &fsl_chan->reg_base->mr, 32) & ~(FSL_DMA_MR_CS
| FSL_DMA_MR_EMS_EN | FSL_DMA_MR_CA), 32);
- while (!dma_is_idle(fsl_chan) && (i++ < 100))
+ for (i = 0; i < 100; i++) {
+ if (dma_is_idle(fsl_chan))
+ break;
udelay(10);
+ }
if (i >= 100 && !dma_is_idle(fsl_chan))
dev_err(fsl_chan->dev, "DMA halt timeout!\n");
}
/*
* Intel I/OAT DMA Linux driver
- * Copyright(c) 2007 Intel Corporation.
+ * Copyright(c) 2007 - 2009 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
/*
* Intel I/OAT DMA Linux driver
- * Copyright(c) 2007 Intel Corporation.
+ * Copyright(c) 2007 - 2009 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
#define DCA_TAG_MAP_MASK 0xDF
+/* expected tag map bytes for I/OAT ver.2 */
+#define DCA2_TAG_MAP_BYTE0 0x80
+#define DCA2_TAG_MAP_BYTE1 0x0
+#define DCA2_TAG_MAP_BYTE2 0x81
+#define DCA2_TAG_MAP_BYTE3 0x82
+#define DCA2_TAG_MAP_BYTE4 0x82
+
+/* verify if tag map matches expected values */
+static inline int dca2_tag_map_valid(u8 *tag_map)
+{
+ return ((tag_map[0] == DCA2_TAG_MAP_BYTE0) &&
+ (tag_map[1] == DCA2_TAG_MAP_BYTE1) &&
+ (tag_map[2] == DCA2_TAG_MAP_BYTE2) &&
+ (tag_map[3] == DCA2_TAG_MAP_BYTE3) &&
+ (tag_map[4] == DCA2_TAG_MAP_BYTE4));
+}
+
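dca2_tag_map_valid() simply compares the BIOS-programmed APICID tag map
against the fixed five-byte signature expected of I/OAT version 2 parts;
the probe code below treats any mismatch as a BIOS bug and disables DCA.
An illustrative check, with byte values mirroring the defines above:

	u8 good[] = { 0x80, 0x00, 0x81, 0x82, 0x82 };
	u8 bad[]  = { 0x80, 0x00, 0x81, 0x82, 0x00 };

	dca2_tag_map_valid(good);	/* non-zero: map accepted */
	dca2_tag_map_valid(bad);	/* zero: DCA will be disabled */
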
/*
* "Legacy" DCA systems do not implement the DCA register set in the
* I/OAT device. Software needs direct support for their tag mappings.
ioatdca->tag_map[i] = 0;
}
+ if (!dca2_tag_map_valid(ioatdca->tag_map)) {
+ dev_err(&pdev->dev, "APICID_TAG_MAP set incorrectly by BIOS, "
+ "disabling DCA\n");
+ free_dca_provider(dca);
+ return NULL;
+ }
+
err = register_dca_provider(dca, &pdev->dev);
if (err) {
free_dca_provider(dca);
/*
* Intel I/OAT DMA Linux driver
- * Copyright(c) 2004 - 2007 Intel Corporation.
+ * Copyright(c) 2004 - 2009 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
ioat_chan->xfercap = xfercap;
ioat_chan->desccount = 0;
INIT_DELAYED_WORK(&ioat_chan->work, ioat_dma_chan_reset_part2);
- if (ioat_chan->device->version != IOAT_VER_1_2) {
- writel(IOAT_DCACTRL_CMPL_WRITE_ENABLE
- | IOAT_DMA_DCA_ANY_CPU,
- ioat_chan->reg_base + IOAT_DCACTRL_OFFSET);
- }
+ if (ioat_chan->device->version == IOAT_VER_2_0)
+ writel(IOAT_DCACTRL_CMPL_WRITE_ENABLE |
+ IOAT_DMA_DCA_ANY_CPU,
+ ioat_chan->reg_base + IOAT_DCACTRL_OFFSET);
+ else if (ioat_chan->device->version == IOAT_VER_3_0)
+ writel(IOAT_DMA_DCA_ANY_CPU,
+ ioat_chan->reg_base + IOAT_DCACTRL_OFFSET);
spin_lock_init(&ioat_chan->cleanup_lock);
spin_lock_init(&ioat_chan->desc_lock);
INIT_LIST_HEAD(&ioat_chan->free_desc);
* up if the client is done with the descriptor
*/
if (async_tx_test_ack(&desc->async_tx)) {
- list_del(&desc->node);
- list_add_tail(&desc->node,
- &ioat_chan->free_desc);
+ list_move_tail(&desc->node,
+ &ioat_chan->free_desc);
} else
desc->async_tx.cookie = 0;
} else {
dma_cookie_t cookie;
int err = 0;
struct completion cmp;
+ unsigned long tmo;
src = kzalloc(sizeof(u8) * IOAT_TEST_SIZE, GFP_KERNEL);
if (!src)
}
device->common.device_issue_pending(dma_chan);
- wait_for_completion_timeout(&cmp, msecs_to_jiffies(3000));
+ tmo = wait_for_completion_timeout(&cmp, msecs_to_jiffies(3000));
- if (device->common.device_is_tx_complete(dma_chan, cookie, NULL, NULL)
+ if (tmo == 0 ||
+ device->common.device_is_tx_complete(dma_chan, cookie, NULL, NULL)
!= DMA_SUCCESS) {
dev_err(&device->pdev->dev,
"Self-test copy timed out, disabling\n");
" %d channels, device version 0x%02x, driver version %s\n",
device->common.chancnt, device->version, IOAT_DMA_VERSION);
+ if (!device->common.chancnt) {
+ dev_err(&device->pdev->dev,
+ "Intel(R) I/OAT DMA Engine problem found: "
+ "zero channels detected\n");
+ goto err_setup_interrupts;
+ }
+
err = ioat_dma_setup_interrupts(device);
if (err)
goto err_setup_interrupts;
struct dma_chan *chan, *_chan;
struct ioat_dma_chan *ioat_chan;
+ if (device->version != IOAT_VER_3_0)
+ cancel_delayed_work(&device->work);
+
ioat_dma_remove_interrupts(device);
dma_async_device_unregister(&device->common);
pci_release_regions(device->pdev);
pci_disable_device(device->pdev);
- if (device->version != IOAT_VER_3_0) {
- cancel_delayed_work(&device->work);
- }
-
list_for_each_entry_safe(chan, _chan,
&device->common.channels, device_node) {
ioat_chan = to_ioat_chan(chan);
/*
- * Copyright(c) 2004 - 2007 Intel Corporation. All rights reserved.
+ * Copyright(c) 2004 - 2009 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
#include <linux/pci_ids.h>
#include <net/tcp.h>
-#define IOAT_DMA_VERSION "3.30"
+#define IOAT_DMA_VERSION "3.64"
enum ioat_interrupt {
none = 0,
#ifdef CONFIG_NET_DMA
switch (dev->version) {
case IOAT_VER_1_2:
- case IOAT_VER_3_0:
sysctl_tcp_dma_copybreak = 4096;
break;
case IOAT_VER_2_0:
sysctl_tcp_dma_copybreak = 2048;
break;
+ case IOAT_VER_3_0:
+ sysctl_tcp_dma_copybreak = 262144;
+ break;
}
#endif
}
/*
- * Copyright(c) 2004 - 2007 Intel Corporation. All rights reserved.
+ * Copyright(c) 2004 - 2009 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
/*
- * Copyright(c) 2004 - 2007 Intel Corporation. All rights reserved.
+ * Copyright(c) 2004 - 2009 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
for (src_idx = 0; src_idx < IOP_ADMA_NUM_SRC_TEST; src_idx++) {
xor_srcs[src_idx] = alloc_page(GFP_KERNEL);
- if (!xor_srcs[src_idx])
- while (src_idx--) {
+ if (!xor_srcs[src_idx]) {
+ while (src_idx--)
__free_page(xor_srcs[src_idx]);
- return -ENOMEM;
- }
+ return -ENOMEM;
+ }
}
dest = alloc_page(GFP_KERNEL);
- if (!dest)
- while (src_idx--) {
+ if (!dest) {
+ while (src_idx--)
__free_page(xor_srcs[src_idx]);
- return -ENOMEM;
- }
+ return -ENOMEM;
+ }
/* Fill in src buffers */
for (src_idx = 0; src_idx < IOP_ADMA_NUM_SRC_TEST; src_idx++) {
static struct platform_driver iop_adma_driver = {
.probe = iop_adma_probe,
- .remove = iop_adma_remove,
+ .remove = __devexit_p(iop_adma_remove),
.driver = {
.owner = THIS_MODULE,
.name = "iop-adma",
ichan->status = IPU_CHANNEL_READY;
- spin_unlock_irqrestore(ipu->lock, flags);
+ spin_unlock_irqrestore(&ipu->lock, flags);
return 0;
}
for (src_idx = 0; src_idx < MV_XOR_NUM_SRC_TEST; src_idx++) {
xor_srcs[src_idx] = alloc_page(GFP_KERNEL);
- if (!xor_srcs[src_idx])
- while (src_idx--) {
+ if (!xor_srcs[src_idx]) {
+ while (src_idx--)
__free_page(xor_srcs[src_idx]);
- return -ENOMEM;
- }
+ return -ENOMEM;
+ }
}
dest = alloc_page(GFP_KERNEL);
- if (!dest)
- while (src_idx--) {
+ if (!dest) {
+ while (src_idx--)
__free_page(xor_srcs[src_idx]);
- return -ENOMEM;
- }
+ return -ENOMEM;
+ }
/* Fill in src buffers */
for (src_idx = 0; src_idx < MV_XOR_NUM_SRC_TEST; src_idx++) {
static struct platform_driver mv_xor_driver = {
.probe = mv_xor_probe,
- .remove = mv_xor_remove,
+ .remove = __devexit_p(mv_xor_remove),
.driver = {
.owner = THIS_MODULE,
.name = MV_XOR_NAME,
dev->sigdata.lock = NULL;
master->lock.hw_lock = NULL; /* SHM removed */
master->lock.file_priv = NULL;
- wake_up_interruptible(&master->lock.lock_queue);
+ wake_up_interruptible_all(&master->lock.lock_queue);
}
break;
case _DRM_AGP:
mutex_lock(&dev->struct_mutex);
if (file_priv->is_master) {
+ struct drm_master *master = file_priv->master;
struct drm_file *temp;
list_for_each_entry(temp, &dev->filelist, lhead) {
if ((temp->master == file_priv->master) &&
temp->authenticated = 0;
}
+ /*
+ * As the master disappears, so does the
+ * possibility to lock.
+ */
+
+ if (master->lock.hw_lock) {
+ if (dev->sigdata.lock == master->lock.hw_lock)
+ dev->sigdata.lock = NULL;
+ master->lock.hw_lock = NULL;
+ master->lock.file_priv = NULL;
+ wake_up_interruptible_all(&master->lock.lock_queue);
+ }
+
if (file_priv->minor->master == file_priv->master) {
/* drop the reference held my the minor */
drm_master_put(&file_priv->minor->master);
__set_current_state(TASK_INTERRUPTIBLE);
if (!master->lock.hw_lock) {
/* Device has been unregistered */
+ send_sig(SIGTERM, current, 0);
ret = -EINTR;
break;
}
/* Contention */
schedule();
if (signal_pending(current)) {
- ret = -ERESTARTSYS;
+ ret = -EINTR;
break;
}
}
drm_ht_remove(&master->magiclist);
- if (master->lock.hw_lock) {
- if (dev->sigdata.lock == master->lock.hw_lock)
- dev->sigdata.lock = NULL;
- master->lock.hw_lock = NULL;
- master->lock.file_priv = NULL;
- wake_up_interruptible(&master->lock.lock_queue);
- }
-
drm_free(master, sizeof(*master), DRM_MEM_DRIVER);
}
file_priv->minor->master != file_priv->master) {
mutex_lock(&dev->struct_mutex);
file_priv->minor->master = drm_master_get(file_priv->master);
- mutex_lock(&dev->struct_mutex);
+ mutex_unlock(&dev->struct_mutex);
}
return 0;
vaddr_atomic = io_mapping_map_atomic_wc(mapping, page_base);
unwritten = __copy_from_user_inatomic_nocache(vaddr_atomic + page_offset,
- user_data, length, length);
+ user_data, length);
io_mapping_unmap_atomic(vaddr_atomic);
if (unwritten)
return -EFAULT;
drm_i915_irq_emit_t *emit = data;
int result;
- RING_LOCK_TEST_WITH_RETURN(dev, file_priv);
-
if (!dev_priv) {
DRM_ERROR("called with no initialization\n");
return -EINVAL;
}
+
+ RING_LOCK_TEST_WITH_RETURN(dev, file_priv);
+
mutex_lock(&dev->struct_mutex);
result = i915_emit_irq(dev);
mutex_unlock(&dev->struct_mutex);
#define LM85_COMPANY_SMSC 0x5c
#define LM85_VERSTEP_VMASK 0xf0
#define LM85_VERSTEP_GENERIC 0x60
+#define LM85_VERSTEP_GENERIC2 0x70
#define LM85_VERSTEP_LM85C 0x60
#define LM85_VERSTEP_LM85B 0x62
#define LM85_VERSTEP_ADM1027 0x60
static const struct i2c_device_id lm85_id[] = {
{ "adm1027", adm1027 },
{ "adt7463", adt7463 },
+ { "adt7468", adt7468 },
{ "lm85", any_chip },
{ "lm85b", lm85b },
{ "lm85c", lm85c },
struct lm85_data *data = lm85_update_device(dev);
int vid;
- if (data->type == adt7463 && (data->vid & 0x80)) {
+ if ((data->type == adt7463 || data->type == adt7468) &&
+ (data->vid & 0x80)) {
/* 6-pin VID (VRM 10) */
vid = vid_from_reg(data->vid & 0x3f, data->vrm);
} else {
address, company, verstep);
/* All supported chips have the version in common */
- if ((verstep & LM85_VERSTEP_VMASK) != LM85_VERSTEP_GENERIC) {
+ if ((verstep & LM85_VERSTEP_VMASK) != LM85_VERSTEP_GENERIC &&
+ (verstep & LM85_VERSTEP_VMASK) != LM85_VERSTEP_GENERIC2) {
dev_dbg(&adapter->dev, "Autodetection failed: "
"unsupported version\n");
return -ENODEV;
return 0;
}
-static void __devexit
+static void
mv64xxx_i2c_unmap_regs(struct mv64xxx_i2c_data *drv_data)
{
if (drv_data->reg_base) {
static struct platform_driver mv64xxx_i2c_driver = {
.probe = mv64xxx_i2c_probe,
- .remove = mv64xxx_i2c_remove,
+ .remove = __devexit_p(mv64xxx_i2c_remove),
.driver = {
.owner = THIS_MODULE,
.name = MV64XXX_I2C_CTLR_NAME,
depends on SOC_TX4939
select BLK_DEV_IDEDMA_SFF
+config BLK_DEV_IDE_AT91
+ tristate "Atmel AT91 (SAM9, CAP9, AT572D940HF) IDE support"
+ depends on ARM && ARCH_AT91 && !ARCH_AT91RM9200 && !ARCH_AT91X40
+ select IDE_TIMINGS
+
config IDE_ARM
tristate "ARM IDE support"
depends on ARM && (ARCH_RPC || ARCH_SHARK)
obj-$(CONFIG_BLK_DEV_IDE_TX4938) += tx4938ide.o
obj-$(CONFIG_BLK_DEV_IDE_TX4939) += tx4939ide.o
+obj-$(CONFIG_BLK_DEV_IDE_AT91) += at91_ide.o
--- /dev/null
+/*
+ * IDE host driver for AT91 (SAM9, CAP9, AT572D940HF) Static Memory Controller
+ * with Compact Flash True IDE logic
+ *
+ * Copyright (c) 2008, 2009 Kelvatek Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ */
+
+#include <linux/version.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/clk.h>
+#include <linux/err.h>
+#include <linux/ide.h>
+#include <linux/platform_device.h>
+
+#include <mach/board.h>
+#include <mach/gpio.h>
+#include <mach/at91sam9263.h>
+#include <mach/at91sam9_smc.h>
+#include <mach/at91sam9263_matrix.h>
+
+#define DRV_NAME "at91_ide"
+
+#define perr(fmt, args...) pr_err(DRV_NAME ": " fmt, ##args)
+#define pdbg(fmt, args...) pr_debug("%s " fmt, __func__, ##args)
+
+/*
+ * Access to the IDE device is possible through the EBI Static Memory
+ * Controller with Compact Flash logic. For details see the EBI and SMC
+ * datasheet sections of any microcontroller from the AT91SAM9 family.
+ *
+ * Within the SMC chip select address space, lines A[23:21] distinguish the
+ * Compact Flash modes (I/O, common memory, attribute memory, True IDE).
+ * The IDE modes are:
+ *   0x00c00000 - True IDE
+ *   0x00e00000 - Alternate True IDE (Alt Status Register)
+ *
+ * In True IDE mode the Task File and the Data Register are mapped at the
+ * same address. To distinguish accesses to these two, different bus data
+ * widths are used: 8 bit for the Task File, 16 bit for Data I/O.
+ *
+ * After initialization we do the 8/16 bit flipping (changes in the SMC MODE
+ * register) only inside the IDE callback routines, which are serialized by
+ * the IDE layer, so no additional locking is needed.
+ */
+
+#define TASK_FILE 0x00c00000
+#define ALT_MODE 0x00e00000
+#define REGS_SIZE 8
+
+#define enter_16bit(cs, mode) do { \
+ mode = at91_sys_read(AT91_SMC_MODE(cs)); \
+ at91_sys_write(AT91_SMC_MODE(cs), mode | AT91_SMC_DBW_16); \
+} while (0)
+
+#define leave_16bit(cs, mode) at91_sys_write(AT91_SMC_MODE(cs), mode);
+
+static void set_smc_timings(const u8 chipselect, const u16 cycle,
+ const u16 setup, const u16 pulse,
+ const u16 data_float, int use_iordy)
+{
+ unsigned long mode = AT91_SMC_READMODE | AT91_SMC_WRITEMODE |
+ AT91_SMC_BAT_SELECT;
+
+ /* disable or enable waiting for IORDY signal */
+ if (use_iordy)
+ mode |= AT91_SMC_EXNWMODE_READY;
+
+ /* add data float cycles if needed */
+ if (data_float)
+ mode |= AT91_SMC_TDF_(data_float);
+
+ at91_sys_write(AT91_SMC_MODE(chipselect), mode);
+
+ /* setup timings in SMC */
+ at91_sys_write(AT91_SMC_SETUP(chipselect), AT91_SMC_NWESETUP_(setup) |
+ AT91_SMC_NCS_WRSETUP_(0) |
+ AT91_SMC_NRDSETUP_(setup) |
+ AT91_SMC_NCS_RDSETUP_(0));
+ at91_sys_write(AT91_SMC_PULSE(chipselect), AT91_SMC_NWEPULSE_(pulse) |
+ AT91_SMC_NCS_WRPULSE_(cycle) |
+ AT91_SMC_NRDPULSE_(pulse) |
+ AT91_SMC_NCS_RDPULSE_(cycle));
+ at91_sys_write(AT91_SMC_CYCLE(chipselect), AT91_SMC_NWECYCLE_(cycle) |
+ AT91_SMC_NRDCYCLE_(cycle));
+}
+
+static unsigned int calc_mck_cycles(unsigned int ns, unsigned int mck_hz)
+{
+ u64 tmp = ns;
+
+ tmp *= mck_hz;
+ tmp += 1000*1000*1000 - 1; /* round up */
+ do_div(tmp, 1000*1000*1000);
+ return (unsigned int) tmp;
+}
+
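The "+ 1000*1000*1000 - 1" before do_div() makes the conversion round up,
so a requested time is never undershot by integer truncation. For example,
with a hypothetical 133 MHz master clock, a 30 ns data-float time gives
30 * 133000000 = 3.99e9; truncation would yield 3 cycles (about 22.6 ns,
too short), while rounding up yields 4 cycles (about 30.1 ns).
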
+static void apply_timings(const u8 chipselect, const u8 pio,
+ const struct ide_timing *timing, int use_iordy)
+{
+ unsigned int t0, t1, t2, t6z;
+ unsigned int cycle, setup, pulse, data_float;
+ unsigned int mck_hz;
+ struct clk *mck;
+
+ /* See table 22 of the Compact Flash standard 4.1 for the meaning.
+ * We do not stretch the active (t2) time, so the setup (t1) plus hold
+ * (th) time must ensure at least the minimal recovery (t2i) time. */
+ t0 = timing->cyc8b;
+ t1 = timing->setup;
+ t2 = timing->act8b;
+ t6z = (pio < 5) ? 30 : 20;
+
+ pdbg("t0=%u t1=%u t2=%u t6z=%u\n", t0, t1, t2, t6z);
+
+ mck = clk_get(NULL, "mck");
+ BUG_ON(IS_ERR(mck));
+ mck_hz = clk_get_rate(mck);
+ pdbg("mck_hz=%u\n", mck_hz);
+
+ cycle = calc_mck_cycles(t0, mck_hz);
+ setup = calc_mck_cycles(t1, mck_hz);
+ pulse = calc_mck_cycles(t2, mck_hz);
+ data_float = calc_mck_cycles(t6z, mck_hz);
+
+ pdbg("cycle=%u setup=%u pulse=%u data_float=%u\n",
+ cycle, setup, pulse, data_float);
+
+ set_smc_timings(chipselect, cycle, setup, pulse, data_float, use_iordy);
+}
+
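Putting the pieces together for PIO mode 0, and taking the standard ATA
figures (cyc8b = 600 ns, setup = 70 ns, act8b = 290 ns, plus t6z = 30 ns
since pio < 5) with an assumed 100 MHz master clock, apply_timings()
would program:

	cycle      = calc_mck_cycles(600, 100000000) = 60
	setup      = calc_mck_cycles( 70, 100000000) =  7
	pulse      = calc_mck_cycles(290, 100000000) = 29
	data_float = calc_mck_cycles( 30, 100000000) =  3
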
+static void at91_ide_input_data(ide_drive_t *drive, struct request *rq,
+ void *buf, unsigned int len)
+{
+ ide_hwif_t *hwif = drive->hwif;
+ struct ide_io_ports *io_ports = &hwif->io_ports;
+ u8 chipselect = hwif->select_data;
+ unsigned long mode;
+
+ pdbg("cs %u buf %p len %d\n", chipselect, buf, len);
+
+ len++;
+
+ enter_16bit(chipselect, mode);
+ __ide_mm_insw((void __iomem *) io_ports->data_addr, buf, len / 2);
+ leave_16bit(chipselect, mode);
+}
+
+static void at91_ide_output_data(ide_drive_t *drive, struct request *rq,
+ void *buf, unsigned int len)
+{
+ ide_hwif_t *hwif = drive->hwif;
+ struct ide_io_ports *io_ports = &hwif->io_ports;
+ u8 chipselect = hwif->select_data;
+ unsigned long mode;
+
+ pdbg("cs %u buf %p len %d\n", chipselect, buf, len);
+
+ enter_16bit(chipselect, mode);
+ __ide_mm_outsw((void __iomem *) io_ports->data_addr, buf, len / 2);
+ leave_16bit(chipselect, mode);
+}
+
+static u8 ide_mm_inb(unsigned long port)
+{
+ return readb((void __iomem *) port);
+}
+
+static void ide_mm_outb(u8 value, unsigned long port)
+{
+ writeb(value, (void __iomem *) port);
+}
+
+static void at91_ide_tf_load(ide_drive_t *drive, ide_task_t *task)
+{
+ ide_hwif_t *hwif = drive->hwif;
+ struct ide_io_ports *io_ports = &hwif->io_ports;
+ struct ide_taskfile *tf = &task->tf;
+ u8 HIHI = (task->tf_flags & IDE_TFLAG_LBA48) ? 0xE0 : 0xEF;
+
+ if (task->tf_flags & IDE_TFLAG_FLAGGED)
+ HIHI = 0xFF;
+
+ if (task->tf_flags & IDE_TFLAG_OUT_DATA) {
+ u16 data = (tf->hob_data << 8) | tf->data;
+
+ at91_ide_output_data(drive, NULL, &data, 2);
+ }
+
+ if (task->tf_flags & IDE_TFLAG_OUT_HOB_FEATURE)
+ ide_mm_outb(tf->hob_feature, io_ports->feature_addr);
+ if (task->tf_flags & IDE_TFLAG_OUT_HOB_NSECT)
+ ide_mm_outb(tf->hob_nsect, io_ports->nsect_addr);
+ if (task->tf_flags & IDE_TFLAG_OUT_HOB_LBAL)
+ ide_mm_outb(tf->hob_lbal, io_ports->lbal_addr);
+ if (task->tf_flags & IDE_TFLAG_OUT_HOB_LBAM)
+ ide_mm_outb(tf->hob_lbam, io_ports->lbam_addr);
+ if (task->tf_flags & IDE_TFLAG_OUT_HOB_LBAH)
+ ide_mm_outb(tf->hob_lbah, io_ports->lbah_addr);
+
+ if (task->tf_flags & IDE_TFLAG_OUT_FEATURE)
+ ide_mm_outb(tf->feature, io_ports->feature_addr);
+ if (task->tf_flags & IDE_TFLAG_OUT_NSECT)
+ ide_mm_outb(tf->nsect, io_ports->nsect_addr);
+ if (task->tf_flags & IDE_TFLAG_OUT_LBAL)
+ ide_mm_outb(tf->lbal, io_ports->lbal_addr);
+ if (task->tf_flags & IDE_TFLAG_OUT_LBAM)
+ ide_mm_outb(tf->lbam, io_ports->lbam_addr);
+ if (task->tf_flags & IDE_TFLAG_OUT_LBAH)
+ ide_mm_outb(tf->lbah, io_ports->lbah_addr);
+
+ if (task->tf_flags & IDE_TFLAG_OUT_DEVICE)
+ ide_mm_outb((tf->device & HIHI) | drive->select, io_ports->device_addr);
+}
+
+static void at91_ide_tf_read(ide_drive_t *drive, ide_task_t *task)
+{
+ ide_hwif_t *hwif = drive->hwif;
+ struct ide_io_ports *io_ports = &hwif->io_ports;
+ struct ide_taskfile *tf = &task->tf;
+
+ if (task->tf_flags & IDE_TFLAG_IN_DATA) {
+ u16 data;
+
+ at91_ide_input_data(drive, NULL, &data, 2);
+ tf->data = data & 0xff;
+ tf->hob_data = (data >> 8) & 0xff;
+ }
+
+ /* be sure we're looking at the low order bits */
+ ide_mm_outb(ATA_DEVCTL_OBS & ~0x80, io_ports->ctl_addr);
+
+ if (task->tf_flags & IDE_TFLAG_IN_FEATURE)
+ tf->feature = ide_mm_inb(io_ports->feature_addr);
+ if (task->tf_flags & IDE_TFLAG_IN_NSECT)
+ tf->nsect = ide_mm_inb(io_ports->nsect_addr);
+ if (task->tf_flags & IDE_TFLAG_IN_LBAL)
+ tf->lbal = ide_mm_inb(io_ports->lbal_addr);
+ if (task->tf_flags & IDE_TFLAG_IN_LBAM)
+ tf->lbam = ide_mm_inb(io_ports->lbam_addr);
+ if (task->tf_flags & IDE_TFLAG_IN_LBAH)
+ tf->lbah = ide_mm_inb(io_ports->lbah_addr);
+ if (task->tf_flags & IDE_TFLAG_IN_DEVICE)
+ tf->device = ide_mm_inb(io_ports->device_addr);
+
+ if (task->tf_flags & IDE_TFLAG_LBA48) {
+ ide_mm_outb(ATA_DEVCTL_OBS | 0x80, io_ports->ctl_addr);
+
+ if (task->tf_flags & IDE_TFLAG_IN_HOB_FEATURE)
+ tf->hob_feature = ide_mm_inb(io_ports->feature_addr);
+ if (task->tf_flags & IDE_TFLAG_IN_HOB_NSECT)
+ tf->hob_nsect = ide_mm_inb(io_ports->nsect_addr);
+ if (task->tf_flags & IDE_TFLAG_IN_HOB_LBAL)
+ tf->hob_lbal = ide_mm_inb(io_ports->lbal_addr);
+ if (task->tf_flags & IDE_TFLAG_IN_HOB_LBAM)
+ tf->hob_lbam = ide_mm_inb(io_ports->lbam_addr);
+ if (task->tf_flags & IDE_TFLAG_IN_HOB_LBAH)
+ tf->hob_lbah = ide_mm_inb(io_ports->lbah_addr);
+ }
+}
+
+static void at91_ide_set_pio_mode(ide_drive_t *drive, const u8 pio)
+{
+ struct ide_timing *timing;
+ u8 chipselect = drive->hwif->select_data;
+ int use_iordy = 0;
+
+ pdbg("chipselect %u pio %u\n", chipselect, pio);
+
+ timing = ide_timing_find_mode(XFER_PIO_0 + pio);
+ BUG_ON(!timing);
+
+ if ((pio > 2 || ata_id_has_iordy(drive->id)) &&
+ !(ata_id_is_cfa(drive->id) && pio > 4))
+ use_iordy = 1;
+
+ apply_timings(chipselect, pio, timing, use_iordy);
+}
+
+static const struct ide_tp_ops at91_ide_tp_ops = {
+ .exec_command = ide_exec_command,
+ .read_status = ide_read_status,
+ .read_altstatus = ide_read_altstatus,
+ .set_irq = ide_set_irq,
+
+ .tf_load = at91_ide_tf_load,
+ .tf_read = at91_ide_tf_read,
+
+ .input_data = at91_ide_input_data,
+ .output_data = at91_ide_output_data,
+};
+
+static const struct ide_port_ops at91_ide_port_ops = {
+ .set_pio_mode = at91_ide_set_pio_mode,
+};
+
+static const struct ide_port_info at91_ide_port_info __initdata = {
+ .port_ops = &at91_ide_port_ops,
+ .tp_ops = &at91_ide_tp_ops,
+ .host_flags = IDE_HFLAG_MMIO | IDE_HFLAG_NO_DMA | IDE_HFLAG_SINGLE |
+ IDE_HFLAG_NO_IO_32BIT | IDE_HFLAG_UNMASK_IRQS,
+ .pio_mask = ATA_PIO5,
+};
+
+/*
+ * If the interrupt is delivered through a GPIO, IRQs are triggered on both
+ * the falling and the rising edge of the signal, whereas the IDE device
+ * requests an interrupt on high level (the rising edge in our case). This
+ * means we get spurious interrupts, so we have to check the interrupt pin
+ * and return immediately from the ISR when the line is at low level.
+ */
+
+irqreturn_t at91_irq_handler(int irq, void *dev_id)
+{
+ int ntries = 8;
+ int pin_val1, pin_val2;
+
+ /* additional deglitching; the line can be noisy on a badly designed PCB */
+ do {
+ pin_val1 = at91_get_gpio_value(irq);
+ pin_val2 = at91_get_gpio_value(irq);
+ } while (pin_val1 != pin_val2 && --ntries > 0);
+
+ if (pin_val1 == 0 || ntries <= 0)
+ return IRQ_HANDLED;
+
+ return ide_intr(irq, dev_id);
+}
+
+static int __init at91_ide_probe(struct platform_device *pdev)
+{
+ int ret;
+ hw_regs_t hw;
+ hw_regs_t *hws[] = { &hw, NULL, NULL, NULL };
+ struct ide_host *host;
+ struct resource *res;
+ unsigned long tf_base = 0, ctl_base = 0;
+ struct at91_cf_data *board = pdev->dev.platform_data;
+
+ if (!board)
+ return -ENODEV;
+
+ if (board->det_pin && at91_get_gpio_value(board->det_pin) != 0) {
+ perr("no device detected\n");
+ return -ENODEV;
+ }
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ perr("can't get memory resource\n");
+ return -ENODEV;
+ }
+
+ if (!devm_request_mem_region(&pdev->dev, res->start + TASK_FILE,
+ REGS_SIZE, "ide") ||
+ !devm_request_mem_region(&pdev->dev, res->start + ALT_MODE,
+ REGS_SIZE, "alt")) {
+ perr("memory resources in use\n");
+ return -EBUSY;
+ }
+
+ pdbg("chipselect %u irq %u res %08lx\n", board->chipselect,
+ board->irq_pin, (unsigned long) res->start);
+
+ tf_base = (unsigned long) devm_ioremap(&pdev->dev, res->start + TASK_FILE,
+ REGS_SIZE);
+ ctl_base = (unsigned long) devm_ioremap(&pdev->dev, res->start + ALT_MODE,
+ REGS_SIZE);
+ if (!tf_base || !ctl_base) {
+ perr("can't map memory regions\n");
+ return -EBUSY;
+ }
+
+ memset(&hw, 0, sizeof(hw));
+
+ if (board->flags & AT91_IDE_SWAP_A0_A2) {
+ /* workaround for stupid hardware bug */
+ hw.io_ports.data_addr = tf_base + 0;
+ hw.io_ports.error_addr = tf_base + 4;
+ hw.io_ports.nsect_addr = tf_base + 2;
+ hw.io_ports.lbal_addr = tf_base + 6;
+ hw.io_ports.lbam_addr = tf_base + 1;
+ hw.io_ports.lbah_addr = tf_base + 5;
+ hw.io_ports.device_addr = tf_base + 3;
+ hw.io_ports.command_addr = tf_base + 7;
+ hw.io_ports.ctl_addr = ctl_base + 3;
+ } else
+ ide_std_init_ports(&hw, tf_base, ctl_base + 6);
+
+ hw.irq = board->irq_pin;
+ hw.chipset = ide_generic;
+ hw.dev = &pdev->dev;
+
+ host = ide_host_alloc(&at91_ide_port_info, hws);
+ if (!host) {
+ perr("failed to allocate ide host\n");
+ return -ENOMEM;
+ }
+
+ /* setup Static Memory Controller - PIO 0 as default */
+ apply_timings(board->chipselect, 0, ide_timing_find_mode(XFER_PIO_0), 0);
+
+ /* with a GPIO interrupt we have to use the quirk handler above */
+ if (board->irq_pin >= PIN_BASE)
+ host->irq_handler = at91_irq_handler;
+
+ host->ports[0]->select_data = board->chipselect;
+
+ ret = ide_host_register(host, &at91_ide_port_info, hws);
+ if (ret) {
+ perr("failed to register ide host\n");
+ goto err_free_host;
+ }
+ platform_set_drvdata(pdev, host);
+ return 0;
+
+err_free_host:
+ ide_host_free(host);
+ return ret;
+}
+
+static int __exit at91_ide_remove(struct platform_device *pdev)
+{
+ struct ide_host *host = platform_get_drvdata(pdev);
+
+ ide_host_remove(host);
+ return 0;
+}
+
+static struct platform_driver at91_ide_driver = {
+ .driver = {
+ .name = DRV_NAME,
+ .owner = THIS_MODULE,
+ },
+ .remove = __exit_p(at91_ide_remove),
+};
+
+static int __init at91_ide_init(void)
+{
+ return platform_driver_probe(&at91_ide_driver, at91_ide_probe);
+}
+
+static void __exit at91_ide_exit(void)
+{
+ platform_driver_unregister(&at91_ide_driver);
+}
+
+module_init(at91_ide_init);
+module_exit(at91_ide_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Stanislaw Gruszka <stf_xl@wp.pl>");
+
IDE_PROC_DEVSET(multcount, 0, 16),
IDE_PROC_DEVSET(nowerr, 0, 1),
IDE_PROC_DEVSET(wcache, 0, 1),
- { 0 },
+ { NULL },
};
IDE_PROC_DEVSET(bios_head, 0, 255),
IDE_PROC_DEVSET(bios_sect, 0, 63),
IDE_PROC_DEVSET(ticks, 0, 255),
- { 0 },
+ { NULL },
};
ide_drive_t *uninitialized_var(drive);
ide_handler_t *handler;
unsigned long flags;
- unsigned long wait = -1;
+ int wait = -1;
int plug_device = 0;
spin_lock_irqsave(&hwif->lock, flags);
return irq_ret;
}
+EXPORT_SYMBOL_GPL(ide_intr);
/**
* ide_do_drive_cmd - issue IDE special command
u8 io_32bit = drive->io_32bit;
u8 mmio = (hwif->host_flags & IDE_HFLAG_MMIO) ? 1 : 0;
+ len++;
+
if (io_32bit) {
unsigned long uninitialized_var(flags);
static int init_irq (ide_hwif_t *hwif)
{
struct ide_io_ports *io_ports = &hwif->io_ports;
+ irq_handler_t irq_handler;
int sa = 0;
mutex_lock(&ide_cfg_mtx);
hwif->timer.function = &ide_timer_expiry;
hwif->timer.data = (unsigned long)hwif;
+ irq_handler = hwif->host->irq_handler;
+ if (irq_handler == NULL)
+ irq_handler = ide_intr;
+
#if defined(__mc68000__)
sa = IRQF_SHARED;
#endif /* __mc68000__ */
if (io_ports->ctl_addr)
hwif->tp_ops->set_irq(hwif, 1);
- if (request_irq(hwif->irq, &ide_intr, sa, hwif->name, hwif))
+ if (request_irq(hwif->irq, irq_handler, sa, hwif->name, hwif))
goto out_up;
if (!hwif->rqsize) {
IDE_PROC_DEVSET(pio_mode, 0, 255),
IDE_PROC_DEVSET(unmaskirq, 0, 1),
IDE_PROC_DEVSET(using_dma, 0, 1),
- { 0 },
+ { NULL },
};
static void proc_ide_settings_warn(void)
__IDE_PROC_DEVSET(speed, 0, 0xffff, NULL, NULL),
__IDE_PROC_DEVSET(tdsc, IDETAPE_DSC_RW_MIN, IDETAPE_DSC_RW_MAX,
mulf_tdsc, divf_tdsc),
- { 0 },
+ { NULL },
};
#endif
*/
static void atkbd_dell_laptop_keymap_fixup(struct atkbd *atkbd)
{
- const unsigned int forced_release_keys[] = {
+ static const unsigned int forced_release_keys[] = {
0x85, 0x86, 0x87, 0x88, 0x89, 0x8a, 0x8b, 0x8f, 0x93,
};
int i;
*/
static void atkbd_hp_keymap_fixup(struct atkbd *atkbd)
{
- const unsigned int forced_release_keys[] = {
+ static const unsigned int forced_release_keys[] = {
0x94,
};
int i;
goto out;
}
- if (!pdata->debounce_time || !pdata->debounce_time > MAX_MULT ||
- !pdata->coldrive_time || !pdata->coldrive_time > MAX_MULT) {
+ if (!pdata->debounce_time || pdata->debounce_time > MAX_MULT ||
+ !pdata->coldrive_time || pdata->coldrive_time > MAX_MULT) {
printk(KERN_ERR DRV_NAME
": Invalid Debounce/Columdrive Time from pdata\n");
bfin_write_KPAD_MSEL(0xFF0); /* Default MSEL */
#define corgikbd_resume NULL
#endif
-static int __init corgikbd_probe(struct platform_device *pdev)
+static int __devinit corgikbd_probe(struct platform_device *pdev)
{
struct corgikbd *corgikbd;
struct input_dev *input_dev;
return err;
}
-static int corgikbd_remove(struct platform_device *pdev)
+static int __devexit corgikbd_remove(struct platform_device *pdev)
{
int i;
struct corgikbd *corgikbd = platform_get_drvdata(pdev);
static struct platform_driver corgikbd_driver = {
.probe = corgikbd_probe,
- .remove = corgikbd_remove,
+ .remove = __devexit_p(corgikbd_remove),
.suspend = corgikbd_suspend,
.resume = corgikbd_resume,
.driver = {
},
};
-static int __devinit corgikbd_init(void)
+static int __init corgikbd_init(void)
{
return platform_driver_register(&corgikbd_driver);
}
#define omap_kp_resume NULL
#endif
-static int __init omap_kp_probe(struct platform_device *pdev)
+static int __devinit omap_kp_probe(struct platform_device *pdev)
{
struct omap_kp *omap_kp;
struct input_dev *input_dev;
return -EINVAL;
}
-static int omap_kp_remove(struct platform_device *pdev)
+static int __devexit omap_kp_remove(struct platform_device *pdev)
{
struct omap_kp *omap_kp = platform_get_drvdata(pdev);
static struct platform_driver omap_kp_driver = {
.probe = omap_kp_probe,
- .remove = omap_kp_remove,
+ .remove = __devexit_p(omap_kp_remove),
.suspend = omap_kp_suspend,
.resume = omap_kp_resume,
.driver = {
},
};
-static int __devinit omap_kp_init(void)
+static int __init omap_kp_init(void)
{
printk(KERN_INFO "OMAP Keypad Driver\n");
return platform_driver_register(&omap_kp_driver);
#define spitzkbd_resume NULL
#endif
-static int __init spitzkbd_probe(struct platform_device *dev)
+static int __devinit spitzkbd_probe(struct platform_device *dev)
{
struct spitzkbd *spitzkbd;
struct input_dev *input_dev;
return err;
}
-static int spitzkbd_remove(struct platform_device *dev)
+static int __devexit spitzkbd_remove(struct platform_device *dev)
{
int i;
struct spitzkbd *spitzkbd = platform_get_drvdata(dev);
static struct platform_driver spitzkbd_driver = {
.probe = spitzkbd_probe,
- .remove = spitzkbd_remove,
+ .remove = __devexit_p(spitzkbd_remove),
.suspend = spitzkbd_suspend,
.resume = spitzkbd_resume,
.driver = {
},
};
-static int __devinit spitzkbd_init(void)
+static int __init spitzkbd_init(void)
{
return platform_driver_register(&spitzkbd_driver);
}
config MOUSE_PS2_LIFEBOOK
bool "Fujitsu Lifebook PS/2 mouse protocol extension" if EMBEDDED
default y
- depends on MOUSE_PS2
+ depends on MOUSE_PS2 && X86
help
Say Y here if you have a Fujitsu B-series Lifebook PS/2
TouchScreen connected to your system.
ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11) ||
ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11) ||
ps2_command(ps2dev, param, PSMOUSE_CMD_GETINFO)) {
- pr_err("elantech.c: sending Elantech magic knock failed.\n");
+ pr_debug("elantech.c: sending Elantech magic knock failed.\n");
return -1;
}
* set of magic numbers
*/
if (param[0] != 0x3c || param[1] != 0x03 || param[2] != 0xc8) {
- pr_info("elantech.c: unexpected magic knock result 0x%02x, 0x%02x, 0x%02x.\n",
- param[0], param[1], param[2]);
+ pr_debug("elantech.c: "
+ "unexpected magic knock result 0x%02x, 0x%02x, 0x%02x.\n",
+ param[0], param[1], param[2]);
+ return -1;
+ }
+
+ /*
+ * Query touchpad's firmware version and see if it reports known
+ * value to avoid mis-detection. Logitech mice are known to respond
+ * to Elantech magic knock and there might be more.
+ */
+ if (synaptics_send_cmd(psmouse, ETP_FW_VERSION_QUERY, param)) {
+ pr_debug("elantech.c: failed to query firmware version.\n");
+ return -1;
+ }
+
+ pr_debug("elantech.c: Elantech version query result 0x%02x, 0x%02x, 0x%02x.\n",
+ param[0], param[1], param[2]);
+
+ if (param[0] == 0 || param[1] != 0) {
+ pr_debug("elantech.c: Probably not a real Elantech touchpad. Aborting.\n");
return -1;
}
int i, error;
unsigned char param[3];
- etd = kzalloc(sizeof(struct elantech_data), GFP_KERNEL);
- psmouse->private = etd;
+ psmouse->private = etd = kzalloc(sizeof(struct elantech_data), GFP_KERNEL);
if (!etd)
return -1;
etd->parity[i] = etd->parity[i & (i - 1)] ^ 1;
/*
- * Find out what version hardware this is
+ * Do the version query again so we can store the result
*/
if (synaptics_send_cmd(psmouse, ETP_FW_VERSION_QUERY, param)) {
pr_err("elantech.c: failed to query firmware version.\n");
goto init_fail;
}
- pr_info("elantech.c: Elantech version query result 0x%02x, 0x%02x, 0x%02x.\n",
- param[0], param[1], param[2]);
etd->fw_version_maj = param[0];
etd->fw_version_min = param[2];
__raw_writel(v, trkball->mmio_base + TBCR);
- while (i--) {
+ while (--i) {
if (__raw_readl(trkball->mmio_base + TBCR) == v)
break;
msleep(1);
static int synaptics_query_hardware(struct psmouse *psmouse)
{
- int retries = 0;
-
- while ((retries++ < 3) && psmouse_reset(psmouse))
- /* empty */;
-
if (synaptics_identify(psmouse))
return -1;
if (synaptics_model_id(psmouse))
struct synaptics_data *priv = psmouse->private;
struct synaptics_data old_priv = *priv;
+ psmouse_reset(psmouse);
+
if (synaptics_detect(psmouse, 0))
return -1;
if (!priv)
return -1;
+ psmouse_reset(psmouse);
+
if (synaptics_query_hardware(psmouse)) {
printk(KERN_ERR "Unable to query Synaptics hardware.\n");
goto init_fail;
struct amba_kmi_port *kmi = io->port_data;
unsigned int timeleft = 10000; /* timeout in 100ms */
- while ((readb(KMISTAT) & KMISTAT_TXEMPTY) == 0 && timeleft--)
+ while ((readb(KMISTAT) & KMISTAT_TXEMPTY) == 0 && --timeleft)
udelay(10);
if (timeleft)
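
Both decrement fixes above share one rationale: when the counter is unsigned, a trailing i-- still executes on the exhausted iteration and leaves the counter at UINT_MAX, so a later "if (timeleft)" reports success on timeout. A minimal userspace illustration, with a hypothetical ready() poll standing in for the hardware status read:

#include <stdio.h>

/* Stand-in for the hardware status poll; never becomes ready, so both
 * loops below run to exhaustion. */
static int ready(void)
{
	return 0;
}

int main(void)
{
	unsigned int post = 3, pre = 3;

	while (!ready() && post--)	/* old form */
		;
	while (!ready() && --pre)	/* fixed form */
		;

	printf("post = %u\n", post);	/* 4294967295: "if (post)" lies */
	printf("pre  = %u\n", pre);	/* 0: timeout correctly detected */
	return 0;
}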
io->write = amba_kmi_write;
io->open = amba_kmi_open;
io->close = amba_kmi_close;
- strlcpy(io->name, dev->dev.bus_id, sizeof(io->name));
- strlcpy(io->phys, dev->dev.bus_id, sizeof(io->phys));
+ strlcpy(io->name, dev_name(&dev->dev), sizeof(io->name));
+ strlcpy(io->phys, dev_name(&dev->dev), sizeof(io->phys));
io->port_data = kmi;
io->dev.parent = &dev->dev;
snprintf(serio->name, sizeof(serio->name), "GSC PS/2 %s",
(ps2port->id == GSC_ID_KEYBOARD) ? "keyboard" : "mouse");
- strlcpy(serio->phys, dev->dev.bus_id, sizeof(serio->phys));
+ strlcpy(serio->phys, dev_name(&dev->dev), sizeof(serio->phys));
serio->id.type = SERIO_8042;
serio->write = gscps2_write;
serio->open = gscps2_open;
serio->write = ps2_write;
serio->open = ps2_open;
serio->close = ps2_close;
- strlcpy(serio->name, dev->dev.bus_id, sizeof(serio->name));
- strlcpy(serio->phys, dev->dev.bus_id, sizeof(serio->phys));
+ strlcpy(serio->name, dev_name(&dev->dev), sizeof(serio->name));
+ strlcpy(serio->phys, dev_name(&dev->dev), sizeof(serio->phys));
serio->port_data = ps2if;
serio->dev.parent = &dev->dev;
ps2if->io = serio;
ts_dev->bufferedmeasure = 0;
snprintf(ts_dev->phys, sizeof(ts_dev->phys),
- "%s/input0", pdev->dev.bus_id);
+ "%s/input0", dev_name(&pdev->dev));
input_dev->name = "atmel touch screen controller";
input_dev->phys = ts_dev->phys;
#define corgits_resume NULL
#endif
-static int __init corgits_probe(struct platform_device *pdev)
+static int __devinit corgits_probe(struct platform_device *pdev)
{
struct corgi_ts *corgi_ts;
struct input_dev *input_dev;
return err;
}
-static int corgits_remove(struct platform_device *pdev)
+static int __devexit corgits_remove(struct platform_device *pdev)
{
struct corgi_ts *corgi_ts = platform_get_drvdata(pdev);
corgi_ts->machinfo->put_hsync();
input_unregister_device(corgi_ts->input);
kfree(corgi_ts);
+
return 0;
}
static struct platform_driver corgits_driver = {
.probe = corgits_probe,
- .remove = corgits_remove,
+ .remove = __devexit_p(corgits_remove),
.suspend = corgits_suspend,
.resume = corgits_resume,
.driver = {
},
};
-static int __devinit corgits_init(void)
+static int __init corgits_init(void)
{
return platform_driver_register(&corgits_driver);
}
pdata->init_platform_hw();
- snprintf(ts->phys, sizeof(ts->phys), "%s/input0", client->dev.bus_id);
+ snprintf(ts->phys, sizeof(ts->phys),
+ "%s/input0", dev_name(&client->dev));
input_dev->name = "TSC2007 Touchscreen";
input_dev->phys = ts->phys;
module_param(swap_xy, bool, 0644);
MODULE_PARM_DESC(swap_xy, "If set X and Y axes are swapped.");
+static int hwcalib_xy;
+module_param(hwcalib_xy, bool, 0644);
+MODULE_PARM_DESC(hwcalib_xy, "If set hw-calibrated X/Y are used if available");
+
/* device specific data/functions */
struct usbtouch_usb;
struct usbtouch_device_info {
#define USB_DEVICE_HID_CLASS(vend, prod) \
.match_flags = USB_DEVICE_ID_MATCH_INT_CLASS \
+ | USB_DEVICE_ID_MATCH_INT_PROTOCOL \
| USB_DEVICE_ID_MATCH_DEVICE, \
.idVendor = (vend), \
.idProduct = (prod), \
static int mtouch_read_data(struct usbtouch_usb *dev, unsigned char *pkt)
{
- dev->x = (pkt[8] << 8) | pkt[7];
- dev->y = (pkt[10] << 8) | pkt[9];
+ if (hwcalib_xy) {
+ dev->x = (pkt[4] << 8) | pkt[3];
+ dev->y = 0xffff - ((pkt[6] << 8) | pkt[5]);
+ } else {
+ dev->x = (pkt[8] << 8) | pkt[7];
+ dev->y = (pkt[10] << 8) | pkt[9];
+ }
dev->touch = (pkt[2] & 0x40) ? 1 : 0;
return 1;
return ret;
}
+ /* Default min/max xy are the raw values, override if using hw-calib */
+ if (hwcalib_xy) {
+ input_set_abs_params(usbtouch->input, ABS_X, 0, 0xffff, 0, 0);
+ input_set_abs_params(usbtouch->input, ABS_Y, 0, 0xffff, 0, 0);
+ }
+
return 0;
}
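
With hwcalib_xy set, coordinates come from the hardware-calibrated words and span the full 0..0xffff range declared via input_set_abs_params() above. A tiny standalone decode of that branch, using made-up packet bytes:

#include <stdio.h>

int main(void)
{
	/* Made-up report bytes; layout as decoded in mtouch_read_data()
	 * above when hwcalib_xy is set: X at pkt[3..4] (little endian),
	 * Y inverted from pkt[5..6]. */
	unsigned char pkt[11] = { 0 };
	unsigned int x, y;

	pkt[3] = 0x34; pkt[4] = 0x12;	/* calibrated X = 0x1234 */
	pkt[5] = 0x00; pkt[6] = 0x40;	/* raw Y word  = 0x4000 */

	x = (pkt[4] << 8) | pkt[3];
	y = 0xffff - ((pkt[6] << 8) | pkt[5]);

	printf("x=0x%04x y=0x%04x\n", x, y);	/* x=0x1234 y=0xbfff */
	return 0;
}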
#endif
hcall(LHCALL_NOTIFY, lvq->config.pfn << PAGE_SHIFT, 0, 0);
}
+/* An extern declaration inside a C file is bad form. Don't do it. */
+extern void lguest_setup_irq(unsigned int irq);
+
/* This routine finds the first virtqueue described in the configuration of
* this device and sets it up.
*
goto unmap;
}
+ /* Make sure the interrupt is allocated. */
+ lguest_setup_irq(lvq->config.irq);
+
/* Tell the interrupt for this virtqueue to go to the virtio_ring
* interrupt handler. */
/* FIXME: We used to have a flag for the Host to tell us we could use
return mddev;
}
-static void mddev_delayed_delete(struct work_struct *ws)
-{
- mddev_t *mddev = container_of(ws, mddev_t, del_work);
- kobject_del(&mddev->kobj);
- kobject_put(&mddev->kobj);
-}
+static void mddev_delayed_delete(struct work_struct *ws);
static void mddev_put(mddev_t *mddev)
{
int mdp_major = 0;
+static void mddev_delayed_delete(struct work_struct *ws)
+{
+ mddev_t *mddev = container_of(ws, mddev_t, del_work);
+
+ if (mddev->private == &md_redundancy_group) {
+ sysfs_remove_group(&mddev->kobj, &md_redundancy_group);
+ if (mddev->sysfs_action)
+ sysfs_put(mddev->sysfs_action);
+ mddev->sysfs_action = NULL;
+ mddev->private = NULL;
+ }
+ kobject_del(&mddev->kobj);
+ kobject_put(&mddev->kobj);
+}
+
static int md_alloc(dev_t dev, char *name)
{
static DEFINE_MUTEX(disks_mutex);
mddev->queue->merge_bvec_fn = NULL;
mddev->queue->unplug_fn = NULL;
mddev->queue->backing_dev_info.congested_fn = NULL;
- if (mddev->pers->sync_request) {
- sysfs_remove_group(&mddev->kobj, &md_redundancy_group);
- if (mddev->sysfs_action)
- sysfs_put(mddev->sysfs_action);
- mddev->sysfs_action = NULL;
- }
module_put(mddev->pers->owner);
+ if (mddev->pers->sync_request)
+ mddev->private = &md_redundancy_group;
mddev->pers = NULL;
/* tell userspace to handle 'inactive' */
sysfs_notify_dirent(mddev->sysfs_state);
usb_to_input_id(udev, &input->id);
input->dev.parent = &dev->intf->dev;
- set_bit(EV_KEY, input->evbit);
- set_bit(BTN_0, input->keybit);
+ __set_bit(EV_KEY, input->evbit);
+ __set_bit(KEY_CAMERA, input->keybit);
if ((ret = input_register_device(input)) < 0)
goto error;
static void uvc_input_report_key(struct uvc_device *dev, unsigned int code,
int value)
{
- if (dev->input)
+ if (dev->input) {
input_report_key(dev->input, code, value);
+ input_sync(dev->input);
+ }
}
#else
return;
uvc_trace(UVC_TRACE_STATUS, "Button (intf %u) %s len %d\n",
data[1], data[3] ? "pressed" : "released", len);
- uvc_input_report_key(dev, BTN_0, data[3]);
+ uvc_input_report_key(dev, KEY_CAMERA, data[3]);
} else {
uvc_trace(UVC_TRACE_STATUS, "Stream %u error event %02x %02x "
"len %d.\n", data[1], data[2], data[3], len);
controllers (default=0)");
static int mpt_msi_enable_sas;
-module_param(mpt_msi_enable_sas, int, 1);
+module_param(mpt_msi_enable_sas, int, 0);
MODULE_PARM_DESC(mpt_msi_enable_sas, " Enable MSI Support for SAS \
- controllers (default=1)");
+ controllers (default=0)");
static int mpt_channel_mapping;
sg_init_one(&sg, data_buf, len);
- /*
- * The spec states that CSR and CID accesses have a timeout
- * of 64 clock cycles.
- */
- data.timeout_ns = 0;
- data.timeout_clks = 64;
+ if (opcode == MMC_SEND_CSD || opcode == MMC_SEND_CID) {
+ /*
+ * The spec states that CSR and CID accesses have a timeout
+ * of 64 clock cycles.
+ */
+ data.timeout_ns = 0;
+ data.timeout_clks = 64;
+ } else
+ mmc_set_data_timeout(&data, card);
mmc_wait_for_req(host, &mrq);
static const struct sdhci_pci_fixes sdhci_cafe = {
.quirks = SDHCI_QUIRK_NO_SIMULT_VDD_AND_POWER |
+ SDHCI_QUIRK_NO_BUSY_IRQ |
SDHCI_QUIRK_BROKEN_TIMEOUT_VAL,
};
if (host->cmd->data)
DBG("Cannot wait for busy signal when also "
"doing a data transfer");
- else
+ else if (!(host->quirks & SDHCI_QUIRK_NO_BUSY_IRQ))
return;
+
+ /* The controller does not support the end-of-busy IRQ,
+ * fall through and take the SDHCI_INT_RESPONSE */
}
if (intmask & SDHCI_INT_RESPONSE)
#define SDHCI_QUIRK_BROKEN_TIMEOUT_VAL (1<<12)
/* Controller has an issue with buffer bits for small transfers */
#define SDHCI_QUIRK_BROKEN_SMALL_PIO (1<<13)
+/* Controller does not provide transfer-complete interrupt when not busy */
+#define SDHCI_QUIRK_NO_BUSY_IRQ (1<<14)
int irq; /* Device IRQ */
void __iomem * ioaddr; /* Mapped address */
if (!(info->flags & IS_POW2PS))
return info;
}
- }
+ } else
+ return info;
}
}
physmap_data = dev->dev.platform_data;
+ if (info->cmtd) {
#ifdef CONFIG_MTD_PARTITIONS
- if (info->nr_parts) {
- del_mtd_partitions(info->cmtd);
- kfree(info->parts);
- } else if (physmap_data->nr_parts)
- del_mtd_partitions(info->cmtd);
- else
- del_mtd_device(info->cmtd);
+ if (info->nr_parts || physmap_data->nr_parts)
+ del_mtd_partitions(info->cmtd);
+ else
+ del_mtd_device(info->cmtd);
#else
- del_mtd_device(info->cmtd);
+ del_mtd_device(info->cmtd);
+#endif
+ }
+#ifdef CONFIG_MTD_PARTITIONS
+ if (info->nr_parts)
+ kfree(info->parts);
#endif
#ifdef CONFIG_MTD_CONCAT
static struct platform_driver orion_nand_driver = {
.probe = orion_nand_probe,
- .remove = orion_nand_remove,
+ .remove = __devexit_p(orion_nand_remove),
.driver = {
.name = "orion_nand",
.owner = THIS_MODULE,
#
obj-$(CONFIG_ARM_AM79C961A) += am79c961a.o
-obj-$(CONFIG_ARM_ETHERH) += etherh.o ../8390.o
+obj-$(CONFIG_ARM_ETHERH) += etherh.o
obj-$(CONFIG_ARM_ETHER3) += ether3.o
obj-$(CONFIG_ARM_ETHER1) += ether1.o
obj-$(CONFIG_ARM_AT91_ETHER) += at91_ether.o
.ndo_open = etherh_open,
.ndo_stop = etherh_close,
.ndo_set_config = etherh_set_config,
- .ndo_start_xmit = ei_start_xmit,
- .ndo_tx_timeout = ei_tx_timeout,
- .ndo_get_stats = ei_get_stats,
- .ndo_set_multicast_list = ei_set_multicast_list,
+ .ndo_start_xmit = __ei_start_xmit,
+ .ndo_tx_timeout = __ei_tx_timeout,
+ .ndo_get_stats = __ei_get_stats,
+ .ndo_set_multicast_list = __ei_set_multicast_list,
.ndo_validate_addr = eth_validate_addr,
.ndo_set_mac_address = eth_mac_addr,
.ndo_change_mtu = eth_change_mtu,
#ifdef CONFIG_NET_POLL_CONTROLLER
- .ndo_poll_controller = ei_poll,
+ .ndo_poll_controller = __ei_poll,
#endif
};
msleep(1);
}
- if (reset_timeout == 0) {
+ if (reset_timeout < 0) {
dev_crit(ksp->dev,
"Timeout waiting for DMA engines to reset\n");
/* And blithely carry on */
static void b44_chip_reset(struct b44 *bp, int reset_kind)
{
struct ssb_device *sdev = bp->sdev;
+ bool was_enabled;
- if (ssb_device_is_enabled(bp->sdev)) {
+ was_enabled = ssb_device_is_enabled(bp->sdev);
+
+ ssb_device_enable(bp->sdev, 0);
+ ssb_pcicore_dev_irqvecs_enable(&sdev->bus->pcicore, sdev);
+
+ if (was_enabled) {
bw32(bp, B44_RCV_LAZY, 0);
bw32(bp, B44_ENET_CTRL, ENET_CTRL_DISABLE);
b44_wait_bit(bp, B44_ENET_CTRL, ENET_CTRL_DISABLE, 200, 1);
}
bw32(bp, B44_DMARX_CTRL, 0);
bp->rx_prod = bp->rx_cons = 0;
- } else
- ssb_pcicore_dev_irqvecs_enable(&sdev->bus->pcicore, sdev);
+ }
- ssb_device_enable(bp->sdev, 0);
b44_clear_stats(bp);
/*
struct net_device *dev = ssb_get_drvdata(sdev);
unregister_netdev(dev);
+ ssb_device_disable(sdev, 0);
ssb_bus_may_powerdown(sdev->bus);
free_netdev(dev);
ssb_pcihost_set_power_state(sdev, PCI_D3hot);
const struct net_device_ops *slave_ops
= slave->dev->netdev_ops;
if (slave_ops->ndo_neigh_setup)
- return slave_ops->ndo_neigh_setup(dev, parms);
+ return slave_ops->ndo_neigh_setup(slave->dev, parms);
}
return 0;
}
spin_lock_irqsave(&priv->txlock, flags);
/* check if there is space to queue this packet */
- if (nr_frags > priv->num_txbdfree) {
+ if ((nr_frags+1) > priv->num_txbdfree) {
/* no space, stop the queue */
netif_stop_queue(dev);
dev->stats.tx_fifo_errors++;
if (this_dev != 0) break; /* only autoprobe 1st one */
printk(KERN_NOTICE "hp-plus.c: Presently autoprobing (not recommended) for a single card.\n");
}
- dev = alloc_ei_netdev();
+ dev = alloc_eip_netdev();
if (!dev)
break;
dev->irq = irq[this_dev];
goto out_inc;
i = atomic_read(&rxring->next_to_clean);
- while (limit-- > 0) {
+ while (limit > 0) {
rxdesc = rxring->desc;
rxdesc += i;
if ((rxdesc->descwb.flags & cpu_to_le16(RXWBFLAG_OWN)) ||
!(rxdesc->descwb.desccnt & RXWBDCNT_WBCPL))
goto out;
+ --limit;
desccnt = rxdesc->descwb.desccnt & RXWBDCNT_DCNT;
adapter->pci_mem_read = netxen_nic_pci_mem_read_2M;
adapter->pci_mem_write = netxen_nic_pci_mem_write_2M;
- mem_ptr0 = ioremap(mem_base, mem_len);
+ mem_ptr0 = pci_ioremap_bar(pdev, 0);
+ if (mem_ptr0 == NULL) {
+ dev_err(&pdev->dev, "failed to map PCI bar 0\n");
+ return -EIO;
+ }
+
pci_len0 = mem_len;
first_page_group_start = 0;
first_page_group_end = 0;
* See if the firmware gave us a virtual-physical port mapping.
*/
adapter->physical_port = adapter->portnum;
- i = adapter->pci_read_normalize(adapter, CRB_V2P(adapter->portnum));
- if (i != 0x55555555)
- adapter->physical_port = i;
+ if (adapter->fw_major < 4) {
+ i = adapter->pci_read_normalize(adapter,
+ CRB_V2P(adapter->portnum));
+ if (i != 0x55555555)
+ adapter->physical_port = i;
+ }
adapter->flags &= ~(NETXEN_NIC_MSI_ENABLED | NETXEN_NIC_MSIX_ENABLED);
DEBUG(3, "%s: in rx_packet(), status %4.4x, rx_status %4.4x.\n",
dev->name, inw(ioaddr+EL3_STATUS), inw(ioaddr+RxStatus));
while (!((rx_status = inw(ioaddr + RxStatus)) & 0x8000) &&
- (--worklimit >= 0)) {
+ worklimit > 0) {
+ worklimit--;
if (rx_status & 0x4000) { /* Error, update stats. */
short error = rx_status & 0x3800;
dev->stats.rx_errors++;
DEBUG(3, "%s: in rx_packet(), status %4.4x, rx_status %4.4x.\n",
dev->name, inw(ioaddr+EL3_STATUS), inw(ioaddr+RX_STATUS));
while (!((rx_status = inw(ioaddr + RX_STATUS)) & 0x8000) &&
- (--worklimit >= 0)) {
+ worklimit > 0) {
+ worklimit--;
if (rx_status & 0x4000) { /* Error, update stats. */
short error = rx_status & 0x3800;
dev->stats.rx_errors++;
#define RTL8169_TX_TIMEOUT (6*HZ)
#define RTL8169_PHY_TIMEOUT (10*HZ)
-#define RTL_EEPROM_SIG cpu_to_le32(0x8129)
-#define RTL_EEPROM_SIG_MASK cpu_to_le32(0xffff)
+#define RTL_EEPROM_SIG 0x8129
#define RTL_EEPROM_SIG_ADDR 0x0000
+#define RTL_EEPROM_MAC_ADDR 0x0007
/* write/read MMIO register */
#define RTL_W8(reg, val8) writeb ((val8), ioaddr + (reg))
/* Cfg9346Bits */
Cfg9346_Lock = 0x00,
Cfg9346_Unlock = 0xc0,
+ Cfg9346_Program = 0x80, /* Programming mode */
+ Cfg9346_EECS = 0x08, /* Chip select */
+ Cfg9346_EESK = 0x04, /* Serial data clock */
+ Cfg9346_EEDI = 0x02, /* Data input */
+ Cfg9346_EEDO = 0x01, /* Data output */
/* rx_mode_bits */
AcceptErr = 0x20,
/* RxConfigBits */
RxCfgFIFOShift = 13,
RxCfgDMAShift = 8,
+ RxCfg9356SEL = 6, /* EEPROM type: 0 = 9346, 1 = 9356 */
/* TxConfigBits */
TxInterFrameGapShift = 24,
};
+/* Delay between EEPROM clock transitions. Force out buffered PCI writes. */
+#define RTL_EEPROM_DELAY() RTL_R8(Cfg9346)
+#define RTL_EEPROM_READ_CMD 6
+
+/* read 16bit word stored in EEPROM. EEPROM is addressed by words. */
+static u16 rtl_eeprom_read(void __iomem *ioaddr, int addr)
+{
+ u16 result = 0;
+ int cmd, cmd_len, i;
+
+ /* check for EEPROM address size (in bits) */
+ if (RTL_R32(RxConfig) & (1 << RxCfg9356SEL)) {
+ /* EEPROM is 93C56 */
+ cmd_len = 3 + 8; /* 3 bits for command id and 8 for address */
+ cmd = (RTL_EEPROM_READ_CMD << 8) | (addr & 0xff);
+ } else {
+ /* EEPROM is 93C46 */
+ cmd_len = 3 + 6; /* 3 bits for command id and 6 for address */
+ cmd = (RTL_EEPROM_READ_CMD << 6) | (addr & 0x3f);
+ }
+
+ /* enter programming mode */
+ RTL_W8(Cfg9346, Cfg9346_Program | Cfg9346_EECS);
+ RTL_EEPROM_DELAY();
+
+ /* write command and requested address */
+ while (cmd_len--) {
+ u8 x = Cfg9346_Program | Cfg9346_EECS;
+
+ x |= (cmd & (1 << cmd_len)) ? Cfg9346_EEDI : 0;
+
+ /* write a bit */
+ RTL_W8(Cfg9346, x);
+ RTL_EEPROM_DELAY();
+
+ /* raise clock */
+ RTL_W8(Cfg9346, x | Cfg9346_EESK);
+ RTL_EEPROM_DELAY();
+ }
+
+ /* lower clock */
+ RTL_W8(Cfg9346, Cfg9346_Program | Cfg9346_EECS);
+ RTL_EEPROM_DELAY();
+
+ /* read back 16bit value */
+ for (i = 16; i > 0; i--) {
+ /* raise clock */
+ RTL_W8(Cfg9346, Cfg9346_Program | Cfg9346_EECS | Cfg9346_EESK);
+ RTL_EEPROM_DELAY();
+
+ result <<= 1;
+ result |= (RTL_R8(Cfg9346) & Cfg9346_EEDO) ? 1 : 0;
+
+ /* lower clock */
+ RTL_W8(Cfg9346, Cfg9346_Program | Cfg9346_EECS);
+ RTL_EEPROM_DELAY();
+ }
+
+ RTL_W8(Cfg9346, Cfg9346_Program);
+ /* leave programming mode */
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ return result;
+}
+
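To make the bit-banging concrete: for a 93C46 part, the word shifted out above is the 3-bit READ opcode (RTL_EEPROM_READ_CMD = 6, i.e. 0b110) followed by a 6-bit word address, MSB first. A standalone check of that framing (illustrative only):

#include <stdio.h>

int main(void)
{
	int cmd = (6 << 6) | (0x07 & 0x3f);	/* READ, word address 7 */
	int bit;

	for (bit = 8; bit >= 0; bit--)		/* 3 + 6 = 9 bits, MSB first */
		putchar((cmd & (1 << bit)) ? '1' : '0');
	putchar('\n');				/* prints 110000111 */
	return 0;
}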
+static void rtl_init_mac_address(struct rtl8169_private *tp,
+ void __iomem *ioaddr)
+{
+ struct pci_dev *pdev = tp->pci_dev;
+ u16 x;
+ u8 mac[8];
+
+ /* read EEPROM signature */
+ x = rtl_eeprom_read(ioaddr, RTL_EEPROM_SIG_ADDR);
+
+ if (x != RTL_EEPROM_SIG) {
+ dev_info(&pdev->dev, "Missing EEPROM signature: %04x\n", x);
+ return;
+ }
+
+ /* read MAC address */
+ x = rtl_eeprom_read(ioaddr, RTL_EEPROM_MAC_ADDR);
+ mac[0] = x & 0xff;
+ mac[1] = x >> 8;
+ x = rtl_eeprom_read(ioaddr, RTL_EEPROM_MAC_ADDR + 1);
+ mac[2] = x & 0xff;
+ mac[3] = x >> 8;
+ x = rtl_eeprom_read(ioaddr, RTL_EEPROM_MAC_ADDR + 2);
+ mac[4] = x & 0xff;
+ mac[5] = x >> 8;
+
+ if (netif_msg_probe(tp)) {
+ DECLARE_MAC_BUF(buf);
+
+ dev_info(&pdev->dev, "MAC address found in EEPROM: %s\n",
+ print_mac(buf, mac));
+ }
+
+ if (is_valid_ether_addr(mac))
+ rtl_rar_set(tp, mac);
+}
+
static int __devinit
rtl8169_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
{
tp->mmio_addr = ioaddr;
+ rtl_init_mac_address(tp, ioaddr);
+
/* Get MAC address */
for (i = 0; i < MAC_ADDR_LEN; i++)
dev->dev_addr[i] = RTL_R8(MAC0 + i);
#define SMC_USE_16BIT 0
#define SMC_USE_32BIT 1
#define SMC_IRQ_SENSE IRQF_TRIGGER_LOW
+#elif defined(CONFIG_ARCH_OMAP34XX)
+ #define SMC_USE_16BIT 0
+ #define SMC_USE_32BIT 1
+ #define SMC_IRQ_SENSE IRQF_TRIGGER_LOW
+ #define SMC_MEM_RESERVED 1
+#elif defined(CONFIG_ARCH_OMAP24XX)
+ #define SMC_USE_16BIT 0
+ #define SMC_USE_32BIT 1
+ #define SMC_IRQ_SENSE IRQF_TRIGGER_LOW
+ #define SMC_MEM_RESERVED 1
#else
/*
* Default configuration
#define CHIP_9116 0x0116
#define CHIP_9117 0x0117
#define CHIP_9118 0x0118
+#define CHIP_9211 0x9211
#define CHIP_9215 0x115A
#define CHIP_9217 0x117A
#define CHIP_9218 0x118A
{ CHIP_9116, "LAN9116" },
{ CHIP_9117, "LAN9117" },
{ CHIP_9118, "LAN9118" },
+ { CHIP_9211, "LAN9211" },
{ CHIP_9215, "LAN9215" },
{ CHIP_9217, "LAN9217" },
{ CHIP_9218, "LAN9218" },
break;
} while (val & (GREG_SWRST_TXRST | GREG_SWRST_RXRST));
- if (limit <= 0)
+ if (limit < 0)
printk(KERN_ERR "%s: SW reset is ghetto.\n", gp->dev->name);
if (gp->phy_type == phy_serialink || gp->phy_type == phy_serdes)
{
u32 reg;
- if (!(tp->tg3_flags2 & TG3_FLG2_5705_PLUS))
+ if (!(tp->tg3_flags2 & TG3_FLG2_5705_PLUS) ||
+ GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5906)
return;
reg = MII_TG3_MISC_SHDW_WREN |
goto err_out_trdev;
}
- ret = request_irq(pdev->irq, tms380tr_interrupt, IRQF_SHARED,
- dev->name, dev);
- if (ret)
- goto err_out_region;
-
dev->base_addr = pci_ioaddr;
dev->irq = pci_irq_line;
dev->dma = 0;
ret = tmsdev_init(dev, &pdev->dev);
if (ret) {
printk("%s: unable to get memory for dev->priv.\n", dev->name);
- goto err_out_irq;
+ goto err_out_region;
}
tp = netdev_priv(dev);
tp->tmspriv = cardinfo;
+ ret = request_irq(pdev->irq, tms380tr_interrupt, IRQF_SHARED,
+ dev->name, dev);
+ if (ret)
+ goto err_out_tmsdev;
+
dev->open = tms380tr_open;
dev->stop = tms380tr_close;
pci_set_drvdata(pdev, dev);
ret = register_netdev(dev);
if (ret)
- goto err_out_tmsdev;
+ goto err_out_irq;
return 0;
+err_out_irq:
+ free_irq(pdev->irq, dev);
err_out_tmsdev:
pci_set_drvdata(pdev, NULL);
tmsdev_term(dev);
-err_out_irq:
- free_irq(pdev->irq, dev);
err_out_region:
release_region(pci_ioaddr, TMS_PCI_IO_EXTENT);
err_out_trdev:
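
The relabelled gotos restore the usual probe-unwind invariant: cleanup labels release resources in exactly the reverse order of acquisition, so whichever step fails frees only what already succeeded. A minimal sketch of the pattern, with hypothetical stand-ins for the region/priv/irq/register steps:

#include <stdio.h>

static int get_region(void)  { return 0; }
static int get_priv(void)    { return 0; }
static int get_irq(void)     { return 0; }
static int do_register(void) { return -1; }	/* pretend the last step fails */
static void put_irq(void)    { puts("free irq"); }
static void put_priv(void)   { puts("free priv"); }
static void put_region(void) { puts("release region"); }

static int probe(void)
{
	if (get_region())
		goto err;
	if (get_priv())
		goto err_region;
	if (get_irq())
		goto err_priv;
	if (do_register())
		goto err_irq;
	return 0;

err_irq:
	put_irq();
err_priv:
	put_priv();
err_region:
	put_region();
err:
	return -1;
}

int main(void)
{
	return probe() ? 1 : 0;	/* prints the releases in reverse order */
}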
static int uec_mdio_reset(struct mii_bus *bus)
{
struct ucc_mii_mng __iomem *regs = (void __iomem *)bus->priv;
- unsigned int timeout = PHY_INIT_TIMEOUT;
+ int timeout = PHY_INIT_TIMEOUT;
mutex_lock(&bus->mdio_lock);
mutex_unlock(&bus->mdio_lock);
- if (timeout <= 0) {
+ if (timeout < 0) {
printk(KERN_ERR "%s: The MII Bus is stuck!\n", bus->name);
return -EBUSY;
}
// Cables-to-Go USB Ethernet Adapter
USB_DEVICE(0x0b95, 0x772a),
.driver_info = (unsigned long) &ax88772_info,
+}, {
+ // ABOCOM for pci
+ USB_DEVICE(0x14ea, 0xab11),
+ .driver_info = (unsigned long) &ax88178_info,
+}, {
+ // ASIX 88772a
+ USB_DEVICE(0x0db0, 0xa877),
+ .driver_info = (unsigned long) &ax88772_info,
},
{ }, // END
};
USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ETHERNET,
USB_CDC_PROTO_NONE),
.driver_info = (unsigned long) &cdc_info,
+}, {
+ /* Ericsson F3507g */
+ USB_DEVICE_AND_INTERFACE_INFO(0x0bdb, 0x1900, USB_CLASS_COMM,
+ USB_CDC_SUBCLASS_MDLM, USB_CDC_PROTO_NONE),
+ .driver_info = (unsigned long) &cdc_info,
},
{ }, // END
};
},
{
USB_DEVICE(0x0a47, 0x9601), /* Hirose USB-100 */
+ .driver_info = (unsigned long)&dm9601_info,
+ },
+ {
+ USB_DEVICE(0x0fe6, 0x8101), /* DM9601 USB to Fast Ethernet Adapter */
.driver_info = (unsigned long)&dm9601_info,
},
{}, // END
if (dev->mii.mdio_read)
return mii_link_ok(&dev->mii);
- /* Otherwise, say we're up (to avoid breaking scripts) */
- return 1;
+ /* Otherwise, do the right thing for drivers calling netif_carrier_{on,off} */
+ return ethtool_op_get_link(net);
}
EXPORT_SYMBOL_GPL(usbnet_get_link);
USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MDLM,
USB_CDC_PROTO_NONE),
.driver_info = (unsigned long) &bogus_mdlm_info,
+}, {
+ /* Motorola MOTOMAGX phones */
+ USB_DEVICE_AND_INTERFACE_INFO(0x22b8, 0x6425, USB_CLASS_COMM,
+ USB_CDC_SUBCLASS_MDLM, USB_CDC_PROTO_NONE),
+ .driver_info = (unsigned long) &bogus_mdlm_info,
},
/* Olympus has some models with a Zaurus-compatible option.
return 0;
}
+static int veth_close(struct net_device *dev)
+{
+ struct veth_priv *priv = netdev_priv(dev);
+
+ netif_carrier_off(dev);
+ netif_carrier_off(priv->peer);
+
+ return 0;
+}
+
static int veth_dev_init(struct net_device *dev)
{
struct veth_net_stats *stats;
static const struct net_device_ops veth_netdev_ops = {
.ndo_init = veth_dev_init,
.ndo_open = veth_open,
+ .ndo_stop = veth_close,
.ndo_start_xmit = veth_xmit,
.ndo_get_stats = veth_get_stats,
.ndo_set_mac_address = eth_mac_addr,
dev->destructor = veth_dev_free;
}
-static void veth_change_state(struct net_device *dev)
-{
- struct net_device *peer;
- struct veth_priv *priv;
-
- priv = netdev_priv(dev);
- peer = priv->peer;
-
- if (netif_carrier_ok(peer)) {
- if (!netif_carrier_ok(dev))
- netif_carrier_on(dev);
- } else {
- if (netif_carrier_ok(dev))
- netif_carrier_off(dev);
- }
-}
-
-static int veth_device_event(struct notifier_block *unused,
- unsigned long event, void *ptr)
-{
- struct net_device *dev = ptr;
-
- if (dev->netdev_ops->ndo_open != veth_open)
- goto out;
-
- switch (event) {
- case NETDEV_CHANGE:
- veth_change_state(dev);
- break;
- }
-out:
- return NOTIFY_DONE;
-}
-
-static struct notifier_block veth_notifier_block __read_mostly = {
- .notifier_call = veth_device_event,
-};
-
/*
* netlink interface
*/
static __init int veth_init(void)
{
- register_netdevice_notifier(&veth_notifier_block);
return rtnl_link_register(&veth_link_ops);
}
static __exit void veth_exit(void)
{
rtnl_link_unregister(&veth_link_ops);
- unregister_netdevice_notifier(&veth_notifier_block);
}
module_init(veth_init);
bad:
if (ah)
ath9k_hw_detach(ah);
+ ath9k_exit_debug(sc);
return error;
}
static int ath_attach(u16 devid, struct ath_softc *sc)
{
struct ieee80211_hw *hw = sc->hw;
- int error = 0;
+ int error = 0, i;
DPRINTF(sc, ATH_DBG_CONFIG, "Attach ATH hw\n");
/* initialize tx/rx engine */
error = ath_tx_init(sc, ATH_TXBUF);
if (error != 0)
- goto detach;
+ goto error_attach;
error = ath_rx_init(sc, ATH_RXBUF);
if (error != 0)
- goto detach;
+ goto error_attach;
#if defined(CONFIG_RFKILL) || defined(CONFIG_RFKILL_MODULE)
/* Initialze h/w Rfkill */
INIT_DELAYED_WORK(&sc->rf_kill.rfkill_poll, ath_rfkill_poll);
/* Initialize s/w rfkill */
- if (ath_init_sw_rfkill(sc))
- goto detach;
+ error = ath_init_sw_rfkill(sc);
+ if (error)
+ goto error_attach;
#endif
error = ieee80211_register_hw(hw);
ath_init_leds(sc);
return 0;
-detach:
- ath_detach(sc);
+
+error_attach:
+ /* cleanup tx queues */
+ for (i = 0; i < ATH9K_NUM_TX_QUEUES; i++)
+ if (ATH_TXQ_SETUP(sc, i))
+ ath_tx_cleanupq(sc, &sc->tx.txq[i]);
+
+ ath9k_hw_detach(sc->sc_ah);
+ ath9k_exit_debug(sc);
+
return error;
}
}
err = iwl_eeprom_check_version(priv);
if (err)
- goto out_iounmap;
+ goto out_free_eeprom;
/* extract MAC Address */
iwl_eeprom_get_mac(priv, priv->mac_addr);
return 0;
out_remove_sysfs:
+ destroy_workqueue(priv->workqueue);
+ priv->workqueue = NULL;
sysfs_remove_group(&pdev->dev.kobj, &iwl_attribute_group);
out_uninit_drv:
iwl_uninit_drv(priv);
out_iounmap:
pci_iounmap(pdev, priv->hw_base);
out_pci_release_regions:
- pci_release_regions(pdev);
pci_set_drvdata(pdev, NULL);
+ pci_release_regions(pdev);
out_pci_disable_device:
pci_disable_device(pdev);
out_ieee80211_free_hw:
CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY, 25000);
if (err < 0) {
IWL_DEBUG_INFO("Failed to init the card\n");
- goto out_remove_sysfs;
+ goto out_iounmap;
}
/***********************
err = iwl3945_eeprom_init(priv);
if (err) {
IWL_ERROR("Unable to init EEPROM\n");
- goto out_remove_sysfs;
+ goto out_iounmap;
}
/* MAC Address location in EEPROM same for 3945/4965 */
get_eeprom_mac(priv, priv->mac_addr);
err = iwl3945_init_channel_map(priv);
if (err) {
IWL_ERROR("initializing regulatory failed: %d\n", err);
- goto out_release_irq;
+ goto out_unset_hw_setting;
}
err = iwl3945_init_geos(priv);
return 0;
out_remove_sysfs:
+ destroy_workqueue(priv->workqueue);
+ priv->workqueue = NULL;
sysfs_remove_group(&pdev->dev.kobj, &iwl3945_attribute_group);
out_free_geos:
iwl3945_free_geos(priv);
out_free_channel_map:
iwl3945_free_channel_map(priv);
-
-
- out_release_irq:
- destroy_workqueue(priv->workqueue);
- priv->workqueue = NULL;
+ out_unset_hw_setting:
iwl3945_unset_hw_setting(priv);
-
out_iounmap:
pci_iounmap(pdev, priv->hw_base);
out_pci_release_regions:
pci_release_regions(pdev);
out_pci_disable_device:
- pci_disable_device(pdev);
pci_set_drvdata(pdev, NULL);
+ pci_disable_device(pdev);
out_ieee80211_free_hw:
ieee80211_free_hw(priv->hw);
out:
static void lbs_ethtool_get_drvinfo(struct net_device *dev,
struct ethtool_drvinfo *info)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
snprintf(info->fw_version, 32, "%u.%u.%u.p%u",
priv->fwrelease >> 24 & 0xff,
static int lbs_ethtool_get_eeprom(struct net_device *dev,
struct ethtool_eeprom *eeprom, u8 * bytes)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct cmd_ds_802_11_eeprom_access cmd;
int ret;
static void lbs_ethtool_get_stats(struct net_device *dev,
struct ethtool_stats *stats, uint64_t *data)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct cmd_ds_mesh_access mesh_access;
int ret;
static int lbs_ethtool_get_sset_count(struct net_device *dev, int sset)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
if (sset == ETH_SS_STATS && dev == priv->mesh_dev)
return MESH_STATS_NUM;
static void lbs_ethtool_get_wol(struct net_device *dev,
struct ethtool_wolinfo *wol)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
if (priv->wol_criteria == 0xffffffff) {
/* Interface driver didn't configure wake */
static int lbs_ethtool_set_wol(struct net_device *dev,
struct ethtool_wolinfo *wol)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
uint32_t criteria = 0;
if (priv->wol_criteria == 0xffffffff && wol->wolopts)
static ssize_t if_usb_firmware_set(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct if_usb_card *cardp = priv->card;
char fwname[FIRMWARE_NAME_MAX];
int ret;
static ssize_t if_usb_boot2_set(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct if_usb_card *cardp = priv->card;
char fwname[FIRMWARE_NAME_MAX];
int ret;
static ssize_t lbs_anycast_get(struct device *dev,
struct device_attribute *attr, char * buf)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct cmd_ds_mesh_access mesh_access;
int ret;
static ssize_t lbs_anycast_set(struct device *dev,
struct device_attribute *attr, const char * buf, size_t count)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct cmd_ds_mesh_access mesh_access;
uint32_t datum;
int ret;
static ssize_t lbs_prb_rsp_limit_get(struct device *dev,
struct device_attribute *attr, char *buf)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct cmd_ds_mesh_access mesh_access;
int ret;
u32 retry_limit;
static ssize_t lbs_prb_rsp_limit_set(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct cmd_ds_mesh_access mesh_access;
int ret;
unsigned long retry_limit;
static ssize_t lbs_rtap_get(struct device *dev,
struct device_attribute *attr, char * buf)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
return snprintf(buf, 5, "0x%X\n", priv->monitormode);
}
struct device_attribute *attr, const char * buf, size_t count)
{
int monitor_mode;
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
sscanf(buf, "%x", &monitor_mode);
if (monitor_mode) {
static ssize_t lbs_mesh_get(struct device *dev,
struct device_attribute *attr, char * buf)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
return snprintf(buf, 5, "0x%X\n", !!priv->mesh_dev);
}
static ssize_t lbs_mesh_set(struct device *dev,
struct device_attribute *attr, const char * buf, size_t count)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
int enable;
int ret, action = CMD_ACT_MESH_CONFIG_STOP;
*/
static int lbs_dev_open(struct net_device *dev)
{
- struct lbs_private *priv = netdev_priv(dev) ;
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
lbs_deb_enter(LBS_DEB_NET);
*/
static int lbs_eth_stop(struct net_device *dev)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_NET);
static void lbs_tx_timeout(struct net_device *dev)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_TX);
*/
static struct net_device_stats *lbs_get_stats(struct net_device *dev)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_NET);
return &priv->stats;
static int lbs_set_mac_address(struct net_device *dev, void *addr)
{
int ret = 0;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct sockaddr *phwaddr = addr;
struct cmd_ds_802_11_mac_address cmd;
static void lbs_set_multicast_list(struct net_device *dev)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
schedule_work(&priv->mcast_work);
}
static int lbs_thread(void *data)
{
struct net_device *dev = data;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
wait_queue_t wait;
lbs_deb_enter(LBS_DEB_THREAD);
goto done;
}
priv = netdev_priv(dev);
+ dev->ml_priv = priv;
if (lbs_init_adapter(priv)) {
lbs_pr_err("failed to initialize adapter structure.\n");
static int mesh_get_default_parameters(struct device *dev,
struct mrvl_mesh_defaults *defs)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct cmd_ds_mesh_config cmd;
int ret;
static ssize_t bootflag_set(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct cmd_ds_mesh_config cmd;
uint32_t datum;
int ret;
static ssize_t boottime_set(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct cmd_ds_mesh_config cmd;
uint32_t datum;
int ret;
static ssize_t channel_set(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct cmd_ds_mesh_config cmd;
uint32_t datum;
int ret;
struct cmd_ds_mesh_config cmd;
struct mrvl_mesh_defaults defs;
struct mrvl_meshie *ie;
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
int len;
int ret;
struct cmd_ds_mesh_config cmd;
struct mrvl_mesh_defaults defs;
struct mrvl_meshie *ie;
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
uint32_t datum;
int ret;
struct cmd_ds_mesh_config cmd;
struct mrvl_mesh_defaults defs;
struct mrvl_meshie *ie;
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
uint32_t datum;
int ret;
struct cmd_ds_mesh_config cmd;
struct mrvl_mesh_defaults defs;
struct mrvl_meshie *ie;
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
uint32_t datum;
int ret;
union iwreq_data *wrqu, char *extra)
{
DECLARE_SSID_BUF(ssid);
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
lbs_deb_enter(LBS_DEB_WEXT);
struct iw_point *dwrq, char *extra)
{
#define SCAN_ITEM_SIZE 128
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int err = 0;
char *ev = extra;
char *stop = ev + dwrq->length;
int lbs_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
unsigned long flags;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct txpd *txpd;
char *p802x_hdr;
uint16_t pkt_len;
static int lbs_get_freq(struct net_device *dev, struct iw_request_info *info,
struct iw_freq *fwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct chan_freq_power *cfp;
lbs_deb_enter(LBS_DEB_WEXT);
static int lbs_get_wap(struct net_device *dev, struct iw_request_info *info,
struct sockaddr *awrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
static int lbs_set_nick(struct net_device *dev, struct iw_request_info *info,
struct iw_point *dwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
static int lbs_get_nick(struct net_device *dev, struct iw_request_info *info,
struct iw_point *dwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
static int mesh_get_nick(struct net_device *dev, struct iw_request_info *info,
struct iw_point *dwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
struct iw_param *vwrq, char *extra)
{
int ret = 0;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
u32 val = vwrq->value;
lbs_deb_enter(LBS_DEB_WEXT);
static int lbs_get_rts(struct net_device *dev, struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
u16 val = 0;
static int lbs_set_frag(struct net_device *dev, struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
u32 val = vwrq->value;
static int lbs_get_frag(struct net_device *dev, struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
u16 val = 0;
static int lbs_get_mode(struct net_device *dev,
struct iw_request_info *info, u32 * uwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
s16 curlevel = 0;
int ret = 0;
static int lbs_set_retry(struct net_device *dev, struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
u16 slimit = 0, llimit = 0;
static int lbs_get_retry(struct net_device *dev, struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
u16 val = 0;
struct iw_point *dwrq, char *extra)
{
int i, j;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct iw_range *range = (struct iw_range *)extra;
struct chan_freq_power *cfp;
u8 rates[MAX_RATES + 1];
static int lbs_set_power(struct net_device *dev, struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
static int lbs_get_power(struct net_device *dev, struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
EXCELLENT = 95,
PERFECT = 100
};
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
u32 rssi_qual;
u32 tx_qual;
u32 quality = 0;
struct iw_freq *fwrq, char *extra)
{
int ret = -EINVAL;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct chan_freq_power *cfp;
struct assoc_request * assoc_req;
struct iw_request_info *info,
struct iw_freq *fwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct chan_freq_power *cfp;
int ret = -EINVAL;
static int lbs_set_rate(struct net_device *dev, struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
u8 new_rate = 0;
int ret = -EINVAL;
u8 rates[MAX_RATES + 1];
static int lbs_get_rate(struct net_device *dev, struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
struct iw_request_info *info, u32 * uwrq, char *extra)
{
int ret = 0;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct assoc_request * assoc_req;
lbs_deb_enter(LBS_DEB_WEXT);
struct iw_request_info *info,
struct iw_point *dwrq, u8 * extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int index = (dwrq->flags & IW_ENCODE_INDEX) - 1;
lbs_deb_enter(LBS_DEB_WEXT);
struct iw_point *dwrq, char *extra)
{
int ret = 0;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct assoc_request * assoc_req;
u16 is_default = 0, index = 0, set_tx_key = 0;
char *extra)
{
int ret = -EINVAL;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct iw_encode_ext *ext = (struct iw_encode_ext *)extra;
int index, max_key_len;
char *extra)
{
int ret = 0;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct iw_encode_ext *ext = (struct iw_encode_ext *)extra;
int alg = ext->alg;
struct assoc_request * assoc_req;
struct iw_point *dwrq,
char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
struct assoc_request * assoc_req;
char *extra)
{
int ret = 0;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
struct iw_param *dwrq,
char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct assoc_request * assoc_req;
int ret = 0;
int updated = 0;
char *extra)
{
int ret = 0;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
struct iw_param *vwrq, char *extra)
{
int ret = 0;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
s16 dbm = (s16) vwrq->value;
lbs_deb_enter(LBS_DEB_WEXT);
static int lbs_get_essid(struct net_device *dev, struct iw_request_info *info,
struct iw_point *dwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
static int lbs_set_essid(struct net_device *dev, struct iw_request_info *info,
struct iw_point *dwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
u8 ssid[IW_ESSID_MAX_SIZE];
u8 ssid_len = 0;
struct iw_request_info *info,
struct iw_point *dwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
struct iw_request_info *info,
struct iw_point *dwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
lbs_deb_enter(LBS_DEB_WEXT);
static int lbs_set_wap(struct net_device *dev, struct iw_request_info *info,
struct sockaddr *awrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct assoc_request * assoc_req;
int ret = 0;
return NOTIFY_DONE;
}
+
+static void orinoco_register_pm_notifier(struct orinoco_private *priv)
+{
+ priv->pm_notifier.notifier_call = orinoco_pm_notifier;
+ register_pm_notifier(&priv->pm_notifier);
+}
+
+static void orinoco_unregister_pm_notifier(struct orinoco_private *priv)
+{
+ unregister_pm_notifier(&priv->pm_notifier);
+}
#else /* !PM_SLEEP || HERMES_CACHE_FW_ON_INIT */
-#define orinoco_pm_notifier NULL
+#define orinoco_register_pm_notifier(priv) do { } while(0)
+#define orinoco_unregister_pm_notifier(priv) do { } while(0)
#endif
/********************************************************************/
priv->cached_fw = NULL;
/* Register PM notifiers */
- priv->pm_notifier.notifier_call = orinoco_pm_notifier;
- register_pm_notifier(&priv->pm_notifier);
+ orinoco_register_pm_notifier(priv);
return dev;
}
kfree(rx_data);
}
- unregister_pm_notifier(&priv->pm_notifier);
+ orinoco_unregister_pm_notifier(priv);
orinoco_uncache_fw(priv);
priv->wpa_ie_len = 0;
__le32 req_id)
{
struct p54_common *priv = dev->priv;
- struct sk_buff *entry = priv->tx_queue.next;
+ struct sk_buff *entry;
unsigned long flags;
spin_lock_irqsave(&priv->tx_queue.lock, flags);
+ entry = priv->tx_queue.next;
while (entry != (struct sk_buff *)&priv->tx_queue) {
struct p54_hdr *hdr = (struct p54_hdr *) entry->data;
struct p54_common *priv = dev->priv;
struct p54_hdr *hdr = (struct p54_hdr *) skb->data;
struct p54_frame_sent *payload = (struct p54_frame_sent *) hdr->data;
- struct sk_buff *entry = (struct sk_buff *) priv->tx_queue.next;
+ struct sk_buff *entry;
u32 addr = le32_to_cpu(hdr->req_id) - priv->headroom;
struct memrecord *range = NULL;
u32 freed = 0;
int count, idx;
spin_lock_irqsave(&priv->tx_queue.lock, flags);
+ entry = (struct sk_buff *) priv->tx_queue.next;
while (entry != (struct sk_buff *)&priv->tx_queue) {
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(entry);
struct p54_hdr *entry_hdr;
struct p54_hdr *data, u32 len)
{
struct p54_common *priv = dev->priv;
- struct sk_buff *entry = priv->tx_queue.next;
+ struct sk_buff *entry;
struct sk_buff *target_skb = NULL;
struct ieee80211_tx_info *info;
struct memrecord *range;
}
}
+ entry = priv->tx_queue.next;
while (left--) {
u32 hole_size;
info = IEEE80211_SKB_CB(entry);
{ USB_DEVICE(0x13b1, 0x000d), USB_DEVICE_DATA(&rt2500usb_ops) },
{ USB_DEVICE(0x13b1, 0x0011), USB_DEVICE_DATA(&rt2500usb_ops) },
{ USB_DEVICE(0x13b1, 0x001a), USB_DEVICE_DATA(&rt2500usb_ops) },
+ /* CNet */
+ { USB_DEVICE(0x1371, 0x9022), USB_DEVICE_DATA(&rt2500usb_ops) },
/* Conceptronic */
{ USB_DEVICE(0x14b2, 0x3c02), USB_DEVICE_DATA(&rt2500usb_ops) },
/* D-LINK */
{ USB_DEVICE(0x148f, 0x2570), USB_DEVICE_DATA(&rt2500usb_ops) },
{ USB_DEVICE(0x148f, 0x2573), USB_DEVICE_DATA(&rt2500usb_ops) },
{ USB_DEVICE(0x148f, 0x9020), USB_DEVICE_DATA(&rt2500usb_ops) },
+ /* Sagem */
+ { USB_DEVICE(0x079b, 0x004b), USB_DEVICE_DATA(&rt2500usb_ops) },
/* Siemens */
{ USB_DEVICE(0x0681, 0x3c06), USB_DEVICE_DATA(&rt2500usb_ops) },
/* SMC */
{ USB_DEVICE(0x0707, 0xee13), USB_DEVICE_DATA(&rt2500usb_ops) },
/* Spairon */
{ USB_DEVICE(0x114b, 0x0110), USB_DEVICE_DATA(&rt2500usb_ops) },
+ /* SURECOM */
+ { USB_DEVICE(0x0769, 0x11f3), USB_DEVICE_DATA(&rt2500usb_ops) },
/* Trust */
{ USB_DEVICE(0x0eb0, 0x9020), USB_DEVICE_DATA(&rt2500usb_ops) },
+ /* VTech */
+ { USB_DEVICE(0x0f88, 0x3012), USB_DEVICE_DATA(&rt2500usb_ops) },
/* Zinwell */
{ USB_DEVICE(0x5a57, 0x0260), USB_DEVICE_DATA(&rt2500usb_ops) },
{ 0, }
*/
static struct usb_device_id rt73usb_device_table[] = {
/* AboCom */
+ { USB_DEVICE(0x07b8, 0xb21b), USB_DEVICE_DATA(&rt73usb_ops) },
+ { USB_DEVICE(0x07b8, 0xb21c), USB_DEVICE_DATA(&rt73usb_ops) },
{ USB_DEVICE(0x07b8, 0xb21d), USB_DEVICE_DATA(&rt73usb_ops) },
+ { USB_DEVICE(0x07b8, 0xb21e), USB_DEVICE_DATA(&rt73usb_ops) },
+ { USB_DEVICE(0x07b8, 0xb21f), USB_DEVICE_DATA(&rt73usb_ops) },
+ /* AL */
+ { USB_DEVICE(0x14b2, 0x3c10), USB_DEVICE_DATA(&rt73usb_ops) },
+ /* Amigo */
+ { USB_DEVICE(0x148f, 0x9021), USB_DEVICE_DATA(&rt73usb_ops) },
+ { USB_DEVICE(0x0eb0, 0x9021), USB_DEVICE_DATA(&rt73usb_ops) },
+ /* AMIT */
+ { USB_DEVICE(0x18c5, 0x0002), USB_DEVICE_DATA(&rt73usb_ops) },
/* Askey */
{ USB_DEVICE(0x1690, 0x0722), USB_DEVICE_DATA(&rt73usb_ops) },
/* ASUS */
{ USB_DEVICE(0x050d, 0x905c), USB_DEVICE_DATA(&rt73usb_ops) },
/* Billionton */
{ USB_DEVICE(0x1631, 0xc019), USB_DEVICE_DATA(&rt73usb_ops) },
+ { USB_DEVICE(0x08dd, 0x0120), USB_DEVICE_DATA(&rt73usb_ops) },
/* Buffalo */
+ { USB_DEVICE(0x0411, 0x00d8), USB_DEVICE_DATA(&rt73usb_ops) },
{ USB_DEVICE(0x0411, 0x00f4), USB_DEVICE_DATA(&rt73usb_ops) },
/* CNet */
{ USB_DEVICE(0x1371, 0x9022), USB_DEVICE_DATA(&rt73usb_ops) },
{ USB_DEVICE(0x07d1, 0x3c04), USB_DEVICE_DATA(&rt73usb_ops) },
{ USB_DEVICE(0x07d1, 0x3c06), USB_DEVICE_DATA(&rt73usb_ops) },
{ USB_DEVICE(0x07d1, 0x3c07), USB_DEVICE_DATA(&rt73usb_ops) },
+ /* Edimax */
+ { USB_DEVICE(0x7392, 0x7318), USB_DEVICE_DATA(&rt73usb_ops) },
+ { USB_DEVICE(0x7392, 0x7618), USB_DEVICE_DATA(&rt73usb_ops) },
+ /* EnGenius */
+ { USB_DEVICE(0x1740, 0x3701), USB_DEVICE_DATA(&rt73usb_ops) },
/* Gemtek */
{ USB_DEVICE(0x15a9, 0x0004), USB_DEVICE_DATA(&rt73usb_ops) },
/* Gigabyte */
{ USB_DEVICE(0x0db0, 0xa861), USB_DEVICE_DATA(&rt73usb_ops) },
{ USB_DEVICE(0x0db0, 0xa874), USB_DEVICE_DATA(&rt73usb_ops) },
/* Ralink */
+ { USB_DEVICE(0x04bb, 0x093d), USB_DEVICE_DATA(&rt73usb_ops) },
{ USB_DEVICE(0x148f, 0x2573), USB_DEVICE_DATA(&rt73usb_ops) },
{ USB_DEVICE(0x148f, 0x2671), USB_DEVICE_DATA(&rt73usb_ops) },
/* Qcom */
{ USB_DEVICE(0x18e8, 0x6196), USB_DEVICE_DATA(&rt73usb_ops) },
{ USB_DEVICE(0x18e8, 0x6229), USB_DEVICE_DATA(&rt73usb_ops) },
{ USB_DEVICE(0x18e8, 0x6238), USB_DEVICE_DATA(&rt73usb_ops) },
+ /* Samsung */
+ { USB_DEVICE(0x04e8, 0x4471), USB_DEVICE_DATA(&rt73usb_ops) },
/* Senao */
{ USB_DEVICE(0x1740, 0x7100), USB_DEVICE_DATA(&rt73usb_ops) },
/* Sitecom */
- { USB_DEVICE(0x0df6, 0x9712), USB_DEVICE_DATA(&rt73usb_ops) },
+ { USB_DEVICE(0x0df6, 0x0024), USB_DEVICE_DATA(&rt73usb_ops) },
+ { USB_DEVICE(0x0df6, 0x0027), USB_DEVICE_DATA(&rt73usb_ops) },
+ { USB_DEVICE(0x0df6, 0x002f), USB_DEVICE_DATA(&rt73usb_ops) },
{ USB_DEVICE(0x0df6, 0x90ac), USB_DEVICE_DATA(&rt73usb_ops) },
+ { USB_DEVICE(0x0df6, 0x9712), USB_DEVICE_DATA(&rt73usb_ops) },
/* Surecom */
{ USB_DEVICE(0x0769, 0x31f3), USB_DEVICE_DATA(&rt73usb_ops) },
+ /* Philips */
+ { USB_DEVICE(0x0471, 0x200a), USB_DEVICE_DATA(&rt73usb_ops) },
/* Planex */
{ USB_DEVICE(0x2019, 0xab01), USB_DEVICE_DATA(&rt73usb_ops) },
{ USB_DEVICE(0x2019, 0xab50), USB_DEVICE_DATA(&rt73usb_ops) },
+ /* Zcom */
+ { USB_DEVICE(0x0cde, 0x001c), USB_DEVICE_DATA(&rt73usb_ops) },
+ /* ZyXEL */
+ { USB_DEVICE(0x0586, 0x3415), USB_DEVICE_DATA(&rt73usb_ops) },
{ 0, }
};
{USB_DEVICE(0x0bda, 0x8189), .driver_info = DEVICE_RTL8187B},
{USB_DEVICE(0x0bda, 0x8197), .driver_info = DEVICE_RTL8187B},
{USB_DEVICE(0x0bda, 0x8198), .driver_info = DEVICE_RTL8187B},
+ /* Surecom */
+ {USB_DEVICE(0x0769, 0x11F2), .driver_info = DEVICE_RTL8187},
+ /* Logitech */
+ {USB_DEVICE(0x0789, 0x010C), .driver_info = DEVICE_RTL8187},
/* Netgear */
{USB_DEVICE(0x0846, 0x6100), .driver_info = DEVICE_RTL8187},
{USB_DEVICE(0x0846, 0x6a00), .driver_info = DEVICE_RTL8187},
/* Sitecom */
{USB_DEVICE(0x0df6, 0x000d), .driver_info = DEVICE_RTL8187},
{USB_DEVICE(0x0df6, 0x0028), .driver_info = DEVICE_RTL8187B},
+ /* Sphairon Access Systems GmbH */
+ {USB_DEVICE(0x114B, 0x0150), .driver_info = DEVICE_RTL8187},
+ /* Dick Smith Electronics */
+ {USB_DEVICE(0x1371, 0x9401), .driver_info = DEVICE_RTL8187},
/* Abocom */
{USB_DEVICE(0x13d1, 0xabe6), .driver_info = DEVICE_RTL8187},
+ /* Qcom */
+ {USB_DEVICE(0x18E8, 0x6232), .driver_info = DEVICE_RTL8187},
+ /* AirLive */
+ {USB_DEVICE(0x1b75, 0x8187), .driver_info = DEVICE_RTL8187},
{}
};
#include <linux/list.h>
#include <linux/netdevice.h>
#include <linux/scatterlist.h>
+#include <linux/skbuff.h>
#include <scsi/libiscsi_tcp.h>
/* from cxgb3 LLD */
struct cxgb3i_conn *cconn;
};
+/**
+ * struct cxgb3i_task_data - private iscsi task data
+ *
+ * @nr_frags: # of coalesced page frags (from scsi sgl)
+ * @frags: coalesced page frags (from scsi sgl)
+ * @skb: tx pdu skb
+ * @offset: data offset for the next pdu
+ * @count: max. possible pdu payload
+ * @sgoffset: offset to the first sg entry for a given offset
+ */
+#define MAX_PDU_FRAGS ((ULP2_MAX_PDU_PAYLOAD + 512 - 1) / 512)
+struct cxgb3i_task_data {
+ unsigned short nr_frags;
+ skb_frag_t frags[MAX_PDU_FRAGS];
+ struct sk_buff *skb;
+ unsigned int offset;
+ unsigned int count;
+ unsigned int sgoffset;
+};
+
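Paired with the iscsi_session_setup() change below, which reserves sizeof(struct iscsi_tcp_task) + sizeof(struct cxgb3i_task_data) of per-task data, the private area can be located by stepping past the tcp task. A plausible accessor sketch, with a hypothetical helper name and assuming dd_data begins with the struct iscsi_tcp_task:

static inline struct cxgb3i_task_data *cxgb3i_task_data(struct iscsi_task *task)
{
	return (struct cxgb3i_task_data *)((char *)task->dd_data +
					   sizeof(struct iscsi_tcp_task));
}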
int cxgb3i_iscsi_init(void);
void cxgb3i_iscsi_cleanup(void);
write_unlock(&cxgb3i_ddp_rwlock);
ddp_log_info("nppods %u (0x%x ~ 0x%x), bits %u, mask 0x%x,0x%x "
- "pkt %u,%u.\n",
+ "pkt %u/%u, %u/%u.\n",
ppmax, ddp->llimit, ddp->ulimit, ddp->idx_bits,
ddp->idx_mask, ddp->rsvd_tag_mask,
- ddp->max_txsz, ddp->max_rxsz);
+ ddp->max_txsz, uinfo.max_txsz,
+ ddp->max_rxsz, uinfo.max_rxsz);
return 0;
free_ddp_map:
* cxgb3i_adapter_ddp_init - initialize the adapter's ddp resource
* @tdev: t3cdev adapter
* @tformat: tag format
- * @txsz: max tx pkt size, filled in by this func.
- * @rxsz: max rx pkt size, filled in by this func.
+ * @txsz: max tx pdu payload size, filled in by this func.
+ * @rxsz: max rx pdu payload size, filled in by this func.
* initialize the ddp pagepod manager for a given adapter if needed and
* setup the tag format for a given iscsi entity
*/
tformat->sw_bits, tformat->rsvd_bits,
tformat->rsvd_shift, tformat->rsvd_mask);
- *txsz = ddp->max_txsz;
- *rxsz = ddp->max_rxsz;
- ddp_log_info("ddp max pkt size: %u, %u.\n",
- ddp->max_txsz, ddp->max_rxsz);
+ *txsz = min_t(unsigned int, ULP2_MAX_PDU_PAYLOAD,
+ ddp->max_txsz - ISCSI_PDU_NONPAYLOAD_LEN);
+ *rxsz = min_t(unsigned int, ULP2_MAX_PDU_PAYLOAD,
+ ddp->max_rxsz - ISCSI_PDU_NONPAYLOAD_LEN);
+ ddp_log_info("max payload size: %u/%u, %u/%u.\n",
+ *txsz, ddp->max_txsz, *rxsz, ddp->max_rxsz);
return 0;
}
EXPORT_SYMBOL_GPL(cxgb3i_adapter_ddp_init);
#ifndef __CXGB3I_ULP2_DDP_H__
#define __CXGB3I_ULP2_DDP_H__
+#include <linux/vmalloc.h>
+
/**
* struct cxgb3i_tag_format - cxgb3i ulp tag format for an iscsi entity
*
struct sk_buff **gl_skb;
};
+#define ISCSI_PDU_NONPAYLOAD_LEN 312 /* bhs(48) + ahs(256) + digest(8) */
#define ULP2_MAX_PKT_SIZE 16224
-#define ULP2_MAX_PDU_PAYLOAD (ULP2_MAX_PKT_SIZE - ISCSI_PDU_NONPAYLOAD_MAX)
+#define ULP2_MAX_PDU_PAYLOAD (ULP2_MAX_PKT_SIZE - ISCSI_PDU_NONPAYLOAD_LEN)
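+/* i.e. ULP2_MAX_PDU_PAYLOAD = 16224 - 312 = 15912 bytes of data per PDU */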
#define PPOD_PAGES_MAX 4
#define PPOD_PAGES_SHIFT 2 /* 4 pages per pod */
#include "cxgb3i.h"
#define DRV_MODULE_NAME "cxgb3i"
-#define DRV_MODULE_VERSION "1.0.0"
-#define DRV_MODULE_RELDATE "Jun. 1, 2008"
+#define DRV_MODULE_VERSION "1.0.1"
+#define DRV_MODULE_RELDATE "Jan. 2009"
static char version[] =
"Chelsio S3xx iSCSI Driver " DRV_MODULE_NAME
cls_session = iscsi_session_setup(&cxgb3i_iscsi_transport, shost,
cmds_max,
- sizeof(struct iscsi_tcp_task),
+ sizeof(struct iscsi_tcp_task) +
+ sizeof(struct cxgb3i_task_data),
initial_cmdsn, ISCSI_MAX_TARGET);
if (!cls_session)
return NULL;
{
struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
struct cxgb3i_conn *cconn = tcp_conn->dd_data;
- unsigned int max = min_t(unsigned int, ULP2_MAX_PDU_PAYLOAD,
- cconn->hba->snic->tx_max_size -
- ISCSI_PDU_NONPAYLOAD_MAX);
+ unsigned int max = max(512 * MAX_SKB_FRAGS, SKB_TX_HEADROOM);
+ max = min(cconn->hba->snic->tx_max_size, max);
if (conn->max_xmit_dlength)
- conn->max_xmit_dlength = min_t(unsigned int,
- conn->max_xmit_dlength, max);
+ conn->max_xmit_dlength = min(conn->max_xmit_dlength, max);
else
conn->max_xmit_dlength = max;
align_pdu_size(conn->max_xmit_dlength);
- cxgb3i_log_info("conn 0x%p, max xmit %u.\n",
+ cxgb3i_api_debug("conn 0x%p, max xmit %u.\n",
conn, conn->max_xmit_dlength);
return 0;
}
{
struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
struct cxgb3i_conn *cconn = tcp_conn->dd_data;
- unsigned int max = min_t(unsigned int, ULP2_MAX_PDU_PAYLOAD,
- cconn->hba->snic->rx_max_size -
- ISCSI_PDU_NONPAYLOAD_MAX);
+ unsigned int max = cconn->hba->snic->rx_max_size;
align_pdu_size(max);
if (conn->max_recv_dlength) {
conn->max_recv_dlength, max);
return -EINVAL;
}
- conn->max_recv_dlength = min_t(unsigned int,
- conn->max_recv_dlength, max);
+ conn->max_recv_dlength = min(conn->max_recv_dlength, max);
align_pdu_size(conn->max_recv_dlength);
} else
conn->max_recv_dlength = max;
.proc_name = "cxgb3i",
.queuecommand = iscsi_queuecommand,
.change_queue_depth = iscsi_change_queue_depth,
- .can_queue = 128 * (ISCSI_DEF_XMIT_CMDS_MAX - 1),
+ .can_queue = CXGB3I_SCSI_QDEPTH_DFLT - 1,
.sg_tablesize = SG_ALL,
.max_sectors = 0xFFFF,
.cmd_per_lun = ISCSI_DEF_CMD_PER_LUN,
#include "cxgb3i_ddp.h"
#ifdef __DEBUG_C3CN_CONN__
-#define c3cn_conn_debug cxgb3i_log_info
+#define c3cn_conn_debug cxgb3i_log_debug
#else
#define c3cn_conn_debug(fmt...)
#endif
#ifdef __DEBUG_C3CN_TX__
-#define c3cn_tx_debug cxgb3i_log_debug
+#define c3cn_tx_debug cxgb3i_log_debug
#else
#define c3cn_tx_debug(fmt...)
#endif
#ifdef __DEBUG_C3CN_RX__
-#define c3cn_rx_debug cxgb3i_log_debug
+#define c3cn_rx_debug cxgb3i_log_debug
#else
#define c3cn_rx_debug(fmt...)
#endif
module_param(cxgb3_rcv_win, int, 0644);
MODULE_PARM_DESC(cxgb3_rcv_win, "TCP receive window in bytes (default=256KB)");
-static int cxgb3_snd_win = 64 * 1024;
+static int cxgb3_snd_win = 128 * 1024;
module_param(cxgb3_snd_win, int, 0644);
-MODULE_PARM_DESC(cxgb3_snd_win, "TCP send window in bytes (default=64KB)");
+MODULE_PARM_DESC(cxgb3_snd_win, "TCP send window in bytes (default=128KB)");
static int cxgb3_rx_credit_thres = 10 * 1024;
module_param(cxgb3_rx_credit_thres, int, 0644);
static void skb_entail(struct s3_conn *c3cn, struct sk_buff *skb,
int flags)
{
- CXGB3_SKB_CB(skb)->seq = c3cn->write_seq;
- CXGB3_SKB_CB(skb)->flags = flags;
+ skb_tcp_seq(skb) = c3cn->write_seq;
+ skb_flags(skb) = flags;
__skb_queue_tail(&c3cn->write_queue, skb);
}
* The number of WRs needed for an skb depends on the number of fragments
* in the skb and whether it has any payload in its main body. This maps the
* length of the gather list represented by an skb into the # of necessary WRs.
- *
- * The max. length of an skb is controlled by the max pdu size which is ~16K.
- * Also, assume the min. fragment length is the sector size (512), then add
- * extra fragment counts for iscsi bhs and payload padding.
+ * The extra two fragments are for iscsi bhs and payload padding.
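+ * (With 4K pages MAX_SKB_FRAGS is 18, so the table below has 20 entries.)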
*/
-#define SKB_WR_LIST_SIZE (16384/512 + 3)
+#define SKB_WR_LIST_SIZE (MAX_SKB_FRAGS + 2)
static unsigned int skb_wrs[SKB_WR_LIST_SIZE] __read_mostly;
static void s3_init_wr_tab(unsigned int wr_len)
static inline void reset_wr_list(struct s3_conn *c3cn)
{
- c3cn->wr_pending_head = NULL;
+ c3cn->wr_pending_head = c3cn->wr_pending_tail = NULL;
}
/*
static inline void enqueue_wr(struct s3_conn *c3cn,
struct sk_buff *skb)
{
- skb_wr_data(skb) = NULL;
+ skb_tx_wr_next(skb) = NULL;
/*
* We want to take an extra reference since both us and the driver
if (!c3cn->wr_pending_head)
c3cn->wr_pending_head = skb;
else
- skb_wr_data(skb) = skb;
+ skb_tx_wr_next(c3cn->wr_pending_tail) = skb;
c3cn->wr_pending_tail = skb;
}
+static int count_pending_wrs(struct s3_conn *c3cn)
+{
+ int n = 0;
+ const struct sk_buff *skb = c3cn->wr_pending_head;
+
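+	/* skb->csum is reused by this driver to hold the number of WRs
+	 * consumed by the skb; see the WR_ACK processing below */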
+ while (skb) {
+ n += skb->csum;
+ skb = skb_tx_wr_next(skb);
+ }
+ return n;
+}
+
static inline struct sk_buff *peek_wr(const struct s3_conn *c3cn)
{
return c3cn->wr_pending_head;
if (likely(skb)) {
/* Don't bother clearing the tail */
- c3cn->wr_pending_head = skb_wr_data(skb);
- skb_wr_data(skb) = NULL;
+ c3cn->wr_pending_head = skb_tx_wr_next(skb);
+ skb_tx_wr_next(skb) = NULL;
}
return skb;
}
}
static inline void make_tx_data_wr(struct s3_conn *c3cn, struct sk_buff *skb,
- int len)
+ int len, int req_completion)
{
struct tx_data_wr *req;
skb_reset_transport_header(skb);
req = (struct tx_data_wr *)__skb_push(skb, sizeof(*req));
- req->wr_hi = htonl(V_WR_OP(FW_WROPCODE_OFLD_TX_DATA));
+ req->wr_hi = htonl(V_WR_OP(FW_WROPCODE_OFLD_TX_DATA) |
+ (req_completion ? F_WR_COMPL : 0));
req->wr_lo = htonl(V_WR_TID(c3cn->tid));
req->sndseq = htonl(c3cn->snd_nxt);
/* len includes the length of any HW ULP additions */
if (unlikely(c3cn->state == C3CN_STATE_CONNECTING ||
c3cn->state == C3CN_STATE_CLOSE_WAIT_1 ||
- c3cn->state == C3CN_STATE_ABORTING)) {
+ c3cn->state >= C3CN_STATE_ABORTING)) {
c3cn_tx_debug("c3cn 0x%p, in closing state %u.\n",
c3cn, c3cn->state);
return 0;
if (c3cn->wr_avail < wrs_needed) {
c3cn_tx_debug("c3cn 0x%p, skb len %u/%u, frag %u, "
"wr %d < %u.\n",
- c3cn, skb->len, skb->datalen, frags,
+ c3cn, skb->len, skb->data_len, frags,
wrs_needed, c3cn->wr_avail);
break;
}
c3cn->wr_unacked += wrs_needed;
enqueue_wr(c3cn, skb);
- if (likely(CXGB3_SKB_CB(skb)->flags & C3CB_FLAG_NEED_HDR)) {
- len += ulp_extra_len(skb);
- make_tx_data_wr(c3cn, skb, len);
- c3cn->snd_nxt += len;
- if ((req_completion
- && c3cn->wr_unacked == wrs_needed)
- || (CXGB3_SKB_CB(skb)->flags & C3CB_FLAG_COMPL)
- || c3cn->wr_unacked >= c3cn->wr_max / 2) {
- struct work_request_hdr *wr = cplhdr(skb);
+ c3cn_tx_debug("c3cn 0x%p, enqueue, skb len %u/%u, frag %u, "
+ "wr %d, left %u, unack %u.\n",
+ c3cn, skb->len, skb->data_len, frags,
+ wrs_needed, c3cn->wr_avail, c3cn->wr_unacked);
+
- wr->wr_hi |= htonl(F_WR_COMPL);
+ if (likely(skb_flags(skb) & C3CB_FLAG_NEED_HDR)) {
+ if ((req_completion &&
+ c3cn->wr_unacked == wrs_needed) ||
+ (skb_flags(skb) & C3CB_FLAG_COMPL) ||
+ c3cn->wr_unacked >= c3cn->wr_max / 2) {
+ req_completion = 1;
c3cn->wr_unacked = 0;
}
- CXGB3_SKB_CB(skb)->flags &= ~C3CB_FLAG_NEED_HDR;
+ len += ulp_extra_len(skb);
+ make_tx_data_wr(c3cn, skb, len, req_completion);
+ c3cn->snd_nxt += len;
+ skb_flags(skb) &= ~C3CB_FLAG_NEED_HDR;
}
total_size += skb->truesize;
if (unlikely(c3cn_flag(c3cn, C3CN_ACTIVE_CLOSE_NEEDED)))
/* upper layer has requested closing */
send_abort_req(c3cn);
- else if (c3cn_push_tx_frames(c3cn, 1))
+ else {
+ if (skb_queue_len(&c3cn->write_queue))
+ c3cn_push_tx_frames(c3cn, 1);
cxgb3i_conn_tx_open(c3cn);
+ }
}
static int do_act_establish(struct t3cdev *cdev, struct sk_buff *skb,
return;
}
- CXGB3_SKB_CB(skb)->seq = ntohl(hdr_cpl->seq);
- CXGB3_SKB_CB(skb)->flags = 0;
+ skb_tcp_seq(skb) = ntohl(hdr_cpl->seq);
+ skb_flags(skb) = 0;
skb_reset_transport_header(skb);
__skb_pull(skb, sizeof(struct cpl_iscsi_hdr));
goto abort_conn;
skb_ulp_mode(skb) = ULP2_FLAG_DATA_READY;
- skb_ulp_pdulen(skb) = ntohs(ddp_cpl.len);
- skb_ulp_ddigest(skb) = ntohl(ddp_cpl.ulp_crc);
+ skb_rx_pdulen(skb) = ntohs(ddp_cpl.len);
+ skb_rx_ddigest(skb) = ntohl(ddp_cpl.ulp_crc);
status = ntohl(ddp_cpl.ddp_status);
c3cn_rx_debug("rx skb 0x%p, len %u, pdulen %u, ddp status 0x%x.\n",
- skb, skb->len, skb_ulp_pdulen(skb), status);
+ skb, skb->len, skb_rx_pdulen(skb), status);
if (status & (1 << RX_DDP_STATUS_HCRC_SHIFT))
skb_ulp_mode(skb) |= ULP2_FLAG_HCRC_ERROR;
} else if (status & (1 << RX_DDP_STATUS_DDP_SHIFT))
skb_ulp_mode(skb) |= ULP2_FLAG_DATA_DDPED;
- c3cn->rcv_nxt = ntohl(ddp_cpl.seq) + skb_ulp_pdulen(skb);
+ c3cn->rcv_nxt = ntohl(ddp_cpl.seq) + skb_rx_pdulen(skb);
__pskb_trim(skb, len);
__skb_queue_tail(&c3cn->receive_queue, skb);
cxgb3i_conn_pdu_ready(c3cn);
* Process an acknowledgment of WR completion. Advance snd_una and send the
* next batch of work requests from the write queue.
*/
+static void check_wr_invariants(struct s3_conn *c3cn)
+{
+ int pending = count_pending_wrs(c3cn);
+
+ if (unlikely(c3cn->wr_avail + pending != c3cn->wr_max))
+ cxgb3i_log_error("TID %u: credit imbalance: avail %u, "
+ "pending %u, total should be %u\n",
+ c3cn->tid, c3cn->wr_avail, pending,
+ c3cn->wr_max);
+}
+
static void process_wr_ack(struct s3_conn *c3cn, struct sk_buff *skb)
{
struct cpl_wr_ack *hdr = cplhdr(skb);
unsigned int credits = ntohs(hdr->credits);
u32 snd_una = ntohl(hdr->snd_una);
+ c3cn_tx_debug("%u WR credits, avail %u, unack %u, TID %u, state %u.\n",
+ credits, c3cn->wr_avail, c3cn->wr_unacked,
+ c3cn->tid, c3cn->state);
+
c3cn->wr_avail += credits;
if (c3cn->wr_unacked > c3cn->wr_max - c3cn->wr_avail)
c3cn->wr_unacked = c3cn->wr_max - c3cn->wr_avail;
break;
}
if (unlikely(credits < p->csum)) {
+ struct tx_data_wr *w = cplhdr(p);
+ cxgb3i_log_error("TID %u got %u WR credits need %u, "
+ "len %u, main body %u, frags %u, "
+ "seq # %u, ACK una %u, ACK nxt %u, "
+ "WR_AVAIL %u, WRs pending %u\n",
+ c3cn->tid, credits, p->csum, p->len,
+ p->len - p->data_len,
+ skb_shinfo(p)->nr_frags,
+ ntohl(w->sndseq), snd_una,
+ ntohl(hdr->snd_nxt), c3cn->wr_avail,
+ count_pending_wrs(c3cn) - credits);
p->csum -= credits;
break;
} else {
}
}
- if (unlikely(before(snd_una, c3cn->snd_una)))
+ check_wr_invariants(c3cn);
+
+ if (unlikely(before(snd_una, c3cn->snd_una))) {
+ cxgb3i_log_error("TID %u, unexpected sequence # %u in WR_ACK "
+ "snd_una %u\n",
+ c3cn->tid, snd_una, c3cn->snd_una);
goto out_free;
+ }
if (c3cn->snd_una != snd_una) {
c3cn->snd_una = snd_una;
dst_confirm(c3cn->dst_cache);
}
- if (skb_queue_len(&c3cn->write_queue) && c3cn_push_tx_frames(c3cn, 0))
+ if (skb_queue_len(&c3cn->write_queue)) {
+ if (c3cn_push_tx_frames(c3cn, 0))
+ cxgb3i_conn_tx_open(c3cn);
+ } else
cxgb3i_conn_tx_open(c3cn);
out_free:
__kfree_skb(skb);
struct dst_entry *dst)
{
BUG_ON(c3cn->cdev != cdev);
- c3cn->wr_max = c3cn->wr_avail = T3C_DATA(cdev)->max_wrs;
+ c3cn->wr_max = c3cn->wr_avail = T3C_DATA(cdev)->max_wrs - 1;
c3cn->wr_unacked = 0;
c3cn->mss_idx = select_mss(c3cn, dst_mtu(dst));
goto out_err;
}
- err = -EPIPE;
if (c3cn->err) {
c3cn_tx_debug("c3cn 0x%p, err %d.\n", c3cn, c3cn->err);
+ err = -EPIPE;
+ goto out_err;
+ }
+
+ if (c3cn->write_seq - c3cn->snd_una >= cxgb3_snd_win) {
+ c3cn_tx_debug("c3cn 0x%p, snd %u - %u > %u.\n",
+ c3cn, c3cn->write_seq, c3cn->snd_una,
+ cxgb3_snd_win);
+ err = -EAGAIN;
goto out_err;
}
* @flag: see C3CB_FLAG_* below
* @ulp_mode: ULP mode/submode of sk_buff
* @seq: tcp sequence number
- * @ddigest: pdu data digest
- * @pdulen: recovered pdu length
- * @wr_data: scratch area for tx wr
*/
+struct cxgb3_skb_rx_cb {
+ __u32 ddigest; /* data digest */
+ __u32 pdulen; /* recovered pdu length */
+};
+
+struct cxgb3_skb_tx_cb {
+ struct sk_buff *wr_next; /* next wr */
+};
+
struct cxgb3_skb_cb {
__u8 flags;
__u8 ulp_mode;
__u32 seq;
- __u32 ddigest;
- __u32 pdulen;
- struct sk_buff *wr_data;
+ union {
+ struct cxgb3_skb_rx_cb rx;
+ struct cxgb3_skb_tx_cb tx;
+ };
};
#define CXGB3_SKB_CB(skb) ((struct cxgb3_skb_cb *)&((skb)->cb[0]))
-
+#define skb_flags(skb) (CXGB3_SKB_CB(skb)->flags)
#define skb_ulp_mode(skb) (CXGB3_SKB_CB(skb)->ulp_mode)
-#define skb_ulp_ddigest(skb) (CXGB3_SKB_CB(skb)->ddigest)
-#define skb_ulp_pdulen(skb) (CXGB3_SKB_CB(skb)->pdulen)
-#define skb_wr_data(skb) (CXGB3_SKB_CB(skb)->wr_data)
+#define skb_tcp_seq(skb) (CXGB3_SKB_CB(skb)->seq)
+#define skb_rx_ddigest(skb) (CXGB3_SKB_CB(skb)->rx.ddigest)
+#define skb_rx_pdulen(skb) (CXGB3_SKB_CB(skb)->rx.pdulen)
+#define skb_tx_wr_next(skb) (CXGB3_SKB_CB(skb)->tx.wr_next)
enum c3cb_flags {
C3CB_FLAG_NEED_HDR = 1 << 0, /* packet needs a TX_DATA_WR header */
/* for TX: a skb must have a headroom of at least TX_HEADER_LEN bytes */
#define TX_HEADER_LEN \
(sizeof(struct tx_data_wr) + sizeof(struct sge_opaque_hdr))
+#define SKB_TX_HEADROOM SKB_MAX_HEAD(TX_HEADER_LEN)
/*
* get and set private ip for iscsi traffic
#define cxgb3i_tx_debug(fmt...)
#endif
+/* always allocate room for AHS */
+#define SKB_TX_PDU_HEADER_LEN \
+ (sizeof(struct iscsi_hdr) + ISCSI_MAX_AHS_SIZE)
+static unsigned int skb_extra_headroom;
static struct page *pad_page;
/*
void cxgb3i_conn_cleanup_task(struct iscsi_task *task)
{
- struct iscsi_tcp_task *tcp_task = task->dd_data;
+ struct cxgb3i_task_data *tdata = task->dd_data +
+ sizeof(struct iscsi_tcp_task);
/* never reached the xmit task callout */
- if (tcp_task->dd_data)
- kfree_skb(tcp_task->dd_data);
- tcp_task->dd_data = NULL;
+ if (tdata->skb)
+ __kfree_skb(tdata->skb);
+ memset(tdata, 0, sizeof(struct cxgb3i_task_data));
/* MNC - Do we need a check in case this is called but
* cxgb3i_conn_alloc_pdu has never been called on the task */
iscsi_tcp_cleanup_task(task);
}
-/*
- * We do not support ahs yet
- */
+static int sgl_seek_offset(struct scatterlist *sgl, unsigned int sgcnt,
+ unsigned int offset, unsigned int *off,
+ struct scatterlist **sgp)
+{
+ int i;
+ struct scatterlist *sg;
+
+ for_each_sg(sgl, sg, sgcnt, i) {
+ if (offset < sg->length) {
+ *off = offset;
+ *sgp = sg;
+ return 0;
+ }
+ offset -= sg->length;
+ }
+ return -EFAULT;
+}
+
+static int sgl_read_to_frags(struct scatterlist *sg, unsigned int sgoffset,
+ unsigned int dlen, skb_frag_t *frags,
+ int frag_max)
+{
+ unsigned int datalen = dlen;
+ unsigned int sglen = sg->length - sgoffset;
+ struct page *page = sg_page(sg);
+ int i;
+
+ i = 0;
+ do {
+ unsigned int copy;
+
+ if (!sglen) {
+ sg = sg_next(sg);
+ if (!sg) {
+ cxgb3i_log_error("%s, sg NULL, len %u/%u.\n",
+ __func__, datalen, dlen);
+ return -EINVAL;
+ }
+ sgoffset = 0;
+ sglen = sg->length;
+ page = sg_page(sg);
+ }
+ copy = min(datalen, sglen);
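+		/* coalesce with the previous frag when this sg entry
+		 * continues the same contiguous page run */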
+ if (i && page == frags[i - 1].page &&
+ sgoffset + sg->offset ==
+ frags[i - 1].page_offset + frags[i - 1].size) {
+ frags[i - 1].size += copy;
+ } else {
+ if (i >= frag_max) {
+ cxgb3i_log_error("%s, too many pages %u, "
+ "dlen %u.\n", __func__,
+ frag_max, dlen);
+ return -EINVAL;
+ }
+
+ frags[i].page = page;
+ frags[i].page_offset = sg->offset + sgoffset;
+ frags[i].size = copy;
+ i++;
+ }
+ datalen -= copy;
+ sgoffset += copy;
+ sglen -= copy;
+ } while (datalen);
+
+ return i;
+}
+
int cxgb3i_conn_alloc_pdu(struct iscsi_task *task, u8 opcode)
{
+ struct iscsi_conn *conn = task->conn;
struct iscsi_tcp_task *tcp_task = task->dd_data;
- struct sk_buff *skb;
+ struct cxgb3i_task_data *tdata = task->dd_data + sizeof(*tcp_task);
+ struct scsi_cmnd *sc = task->sc;
+ int headroom = SKB_TX_PDU_HEADER_LEN;
+ tcp_task->dd_data = tdata;
task->hdr = NULL;
- /* always allocate rooms for AHS */
- skb = alloc_skb(sizeof(struct iscsi_hdr) + ISCSI_MAX_AHS_SIZE +
- TX_HEADER_LEN, GFP_ATOMIC);
- if (!skb)
+
+ /* write command, need to send data pdus */
+ if (skb_extra_headroom && (opcode == ISCSI_OP_SCSI_DATA_OUT ||
+ (opcode == ISCSI_OP_SCSI_CMD &&
+ (scsi_bidi_cmnd(sc) || sc->sc_data_direction == DMA_TO_DEVICE))))
+ headroom += min(skb_extra_headroom, conn->max_xmit_dlength);
+
+ tdata->skb = alloc_skb(TX_HEADER_LEN + headroom, GFP_ATOMIC);
+ if (!tdata->skb)
return -ENOMEM;
+ skb_reserve(tdata->skb, TX_HEADER_LEN);
cxgb3i_tx_debug("task 0x%p, opcode 0x%x, skb 0x%p.\n",
- task, opcode, skb);
+ task, opcode, tdata->skb);
- tcp_task->dd_data = skb;
- skb_reserve(skb, TX_HEADER_LEN);
- task->hdr = (struct iscsi_hdr *)skb->data;
- task->hdr_max = sizeof(struct iscsi_hdr);
+ task->hdr = (struct iscsi_hdr *)tdata->skb->data;
+ task->hdr_max = SKB_TX_PDU_HEADER_LEN;
/* data_out uses scsi_cmd's itt */
if (opcode != ISCSI_OP_SCSI_DATA_OUT)
int cxgb3i_conn_init_pdu(struct iscsi_task *task, unsigned int offset,
unsigned int count)
{
- struct iscsi_tcp_task *tcp_task = task->dd_data;
- struct sk_buff *skb = tcp_task->dd_data;
struct iscsi_conn *conn = task->conn;
- struct page *pg;
+ struct iscsi_tcp_task *tcp_task = task->dd_data;
+ struct cxgb3i_task_data *tdata = tcp_task->dd_data;
+ struct sk_buff *skb = tdata->skb;
unsigned int datalen = count;
int i, padlen = iscsi_padding(count);
- skb_frag_t *frag;
+ struct page *pg;
cxgb3i_tx_debug("task 0x%p,0x%p, offset %u, count %u, skb 0x%p.\n",
task, task->sc, offset, count, skb);
return 0;
if (task->sc) {
- struct scatterlist *sg;
- struct scsi_data_buffer *sdb;
- unsigned int sgoffset = offset;
- struct page *sgpg;
- unsigned int sglen;
-
- sdb = scsi_out(task->sc);
- sg = sdb->table.sgl;
-
- for_each_sg(sdb->table.sgl, sg, sdb->table.nents, i) {
- cxgb3i_tx_debug("sg %d, page 0x%p, len %u offset %u\n",
- i, sg_page(sg), sg->length, sg->offset);
-
- if (sgoffset < sg->length)
- break;
- sgoffset -= sg->length;
+ struct scsi_data_buffer *sdb = scsi_out(task->sc);
+ struct scatterlist *sg = NULL;
+ int err;
+
+ tdata->offset = offset;
+ tdata->count = count;
+ err = sgl_seek_offset(sdb->table.sgl, sdb->table.nents,
+ tdata->offset, &tdata->sgoffset, &sg);
+ if (err < 0) {
+ cxgb3i_log_warn("tpdu, sgl %u, bad offset %u/%u.\n",
+ sdb->table.nents, tdata->offset,
+ sdb->length);
+ return err;
}
- sgpg = sg_page(sg);
- sglen = sg->length - sgoffset;
-
- do {
- int j = skb_shinfo(skb)->nr_frags;
- unsigned int copy;
-
- if (!sglen) {
- sg = sg_next(sg);
- sgpg = sg_page(sg);
- sgoffset = 0;
- sglen = sg->length;
- ++i;
+ err = sgl_read_to_frags(sg, tdata->sgoffset, tdata->count,
+ tdata->frags, MAX_PDU_FRAGS);
+ if (err < 0) {
+ cxgb3i_log_warn("tpdu, sgl %u, bad offset %u + %u.\n",
+ sdb->table.nents, tdata->offset,
+ tdata->count);
+ return err;
+ }
+ tdata->nr_frags = err;
+
+ if (tdata->nr_frags > MAX_SKB_FRAGS ||
+ (padlen && tdata->nr_frags == MAX_SKB_FRAGS)) {
+ char *dst = skb->data + task->hdr_len;
+ skb_frag_t *frag = tdata->frags;
+
+ /* data fits in the skb's headroom */
+ for (i = 0; i < tdata->nr_frags; i++, frag++) {
+ char *src = kmap_atomic(frag->page,
+ KM_SOFTIRQ0);
+
+ memcpy(dst, src+frag->page_offset, frag->size);
+ dst += frag->size;
+ kunmap_atomic(src, KM_SOFTIRQ0);
}
- copy = min(sglen, datalen);
- if (j && skb_can_coalesce(skb, j, sgpg,
- sg->offset + sgoffset)) {
- skb_shinfo(skb)->frags[j - 1].size += copy;
- } else {
- get_page(sgpg);
- skb_fill_page_desc(skb, j, sgpg,
- sg->offset + sgoffset, copy);
+ if (padlen) {
+ memset(dst, 0, padlen);
+ padlen = 0;
}
- sgoffset += copy;
- sglen -= copy;
- datalen -= copy;
- } while (datalen);
+ skb_put(skb, count + padlen);
+ } else {
+		/* data fits in the skb page frags */
+ for (i = 0; i < tdata->nr_frags; i++)
+ get_page(tdata->frags[i].page);
+
+ memcpy(skb_shinfo(skb)->frags, tdata->frags,
+ sizeof(skb_frag_t) * tdata->nr_frags);
+ skb_shinfo(skb)->nr_frags = tdata->nr_frags;
+ skb->len += count;
+ skb->data_len += count;
+ skb->truesize += count;
+ }
+
} else {
pg = virt_to_page(task->data);
- while (datalen) {
- i = skb_shinfo(skb)->nr_frags;
- frag = &skb_shinfo(skb)->frags[i];
-
- get_page(pg);
- frag->page = pg;
- frag->page_offset = 0;
- frag->size = min((unsigned int)PAGE_SIZE, datalen);
-
- skb_shinfo(skb)->nr_frags++;
- datalen -= frag->size;
- pg++;
- }
+ get_page(pg);
+ skb_fill_page_desc(skb, 0, pg, offset_in_page(task->data),
+ count);
+ skb->len += count;
+ skb->data_len += count;
+ skb->truesize += count;
}
if (padlen) {
i = skb_shinfo(skb)->nr_frags;
- frag = &skb_shinfo(skb)->frags[i];
- frag->page = pad_page;
- frag->page_offset = 0;
- frag->size = padlen;
- skb_shinfo(skb)->nr_frags++;
+ get_page(pad_page);
+ skb_fill_page_desc(skb, skb_shinfo(skb)->nr_frags, pad_page, 0,
+ padlen);
+
+ skb->data_len += padlen;
+ skb->truesize += padlen;
+ skb->len += padlen;
}
- datalen = count + padlen;
- skb->data_len += datalen;
- skb->truesize += datalen;
- skb->len += datalen;
return 0;
}
int cxgb3i_conn_xmit_pdu(struct iscsi_task *task)
{
- struct iscsi_tcp_task *tcp_task = task->dd_data;
- struct sk_buff *skb = tcp_task->dd_data;
struct iscsi_tcp_conn *tcp_conn = task->conn->dd_data;
struct cxgb3i_conn *cconn = tcp_conn->dd_data;
+ struct iscsi_tcp_task *tcp_task = task->dd_data;
+ struct cxgb3i_task_data *tdata = tcp_task->dd_data;
+ struct sk_buff *skb = tdata->skb;
unsigned int datalen;
int err;
return 0;
datalen = skb->data_len;
- tcp_task->dd_data = NULL;
+ tdata->skb = NULL;
err = cxgb3i_c3cn_send_pdus(cconn->cep->c3cn, skb);
- cxgb3i_tx_debug("task 0x%p, skb 0x%p, len %u/%u, rv %d.\n",
- task, skb, skb->len, skb->data_len, err);
if (err > 0) {
int pdulen = err;
+ cxgb3i_tx_debug("task 0x%p, skb 0x%p, len %u/%u, rv %d.\n",
+ task, skb, skb->len, skb->data_len, err);
+
if (task->conn->hdrdgst_en)
pdulen += ISCSI_DIGEST_SIZE;
if (datalen && task->conn->datadgst_en)
return err;
}
/* reset skb to send when we are called again */
- tcp_task->dd_data = skb;
+ tdata->skb = skb;
return -EAGAIN;
}
int cxgb3i_pdu_init(void)
{
+ if (SKB_TX_HEADROOM > (512 * MAX_SKB_FRAGS))
+ skb_extra_headroom = SKB_TX_HEADROOM;
pad_page = alloc_page(GFP_KERNEL);
if (!pad_page)
return -ENOMEM;
skb = skb_peek(&c3cn->receive_queue);
while (!err && skb) {
__skb_unlink(skb, &c3cn->receive_queue);
- read += skb_ulp_pdulen(skb);
+ read += skb_rx_pdulen(skb);
+ cxgb3i_rx_debug("conn 0x%p, cn 0x%p, rx skb 0x%p, pdulen %u.\n",
+ conn, c3cn, skb, skb_rx_pdulen(skb));
err = cxgb3i_conn_read_pdu_skb(conn, skb);
__kfree_skb(skb);
skb = skb_peek(&c3cn->receive_queue);
cxgb3i_c3cn_rx_credits(c3cn, read);
}
conn->rxdata_octets += read;
+
+ if (err) {
+ cxgb3i_log_info("conn 0x%p rx failed err %d.\n", conn, err);
+ iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
+ }
}
void cxgb3i_conn_tx_open(struct s3_conn *c3cn)
#define ULP2_FLAG_DCRC_ERROR 0x20
#define ULP2_FLAG_PAD_ERROR 0x40
-void cxgb3i_conn_closing(struct s3_conn *);
+void cxgb3i_conn_closing(struct s3_conn *c3cn);
void cxgb3i_conn_pdu_ready(struct s3_conn *c3cn);
void cxgb3i_conn_tx_open(struct s3_conn *c3cn);
#endif
{ PCI_VDEVICE(TTI, 0x3530), (kernel_ulong_t)&hptiop_itl_ops },
{ PCI_VDEVICE(TTI, 0x3560), (kernel_ulong_t)&hptiop_itl_ops },
{ PCI_VDEVICE(TTI, 0x4322), (kernel_ulong_t)&hptiop_itl_ops },
+ { PCI_VDEVICE(TTI, 0x4321), (kernel_ulong_t)&hptiop_itl_ops },
{ PCI_VDEVICE(TTI, 0x4210), (kernel_ulong_t)&hptiop_itl_ops },
{ PCI_VDEVICE(TTI, 0x4211), (kernel_ulong_t)&hptiop_itl_ops },
{ PCI_VDEVICE(TTI, 0x4310), (kernel_ulong_t)&hptiop_itl_ops },
action = ACTION_FAIL;
break;
case ABORTED_COMMAND:
+ action = ACTION_FAIL;
if (sshdr.asc == 0x10) { /* DIF */
description = "Target Data Integrity Failure";
- action = ACTION_FAIL;
error = -EILSEQ;
- } else
- action = ACTION_RETRY;
+ }
break;
case NOT_READY:
/* If the device is in the process of becoming
static void sd_print_sense_hdr(struct scsi_disk *, struct scsi_sense_hdr *);
static void sd_print_result(struct scsi_disk *, int);
+static DEFINE_SPINLOCK(sd_index_lock);
static DEFINE_IDA(sd_index_ida);
/* This semaphore is used to mediate the 0->1 reference get in the
if (!ida_pre_get(&sd_index_ida, GFP_KERNEL))
goto out_put;
+ spin_lock(&sd_index_lock);
error = ida_get_new(&sd_index_ida, &index);
+ spin_unlock(&sd_index_lock);
} while (error == -EAGAIN);
if (error)
return 0;
out_free_index:
+ spin_lock(&sd_index_lock);
ida_remove(&sd_index_ida, index);
+ spin_unlock(&sd_index_lock);
out_put:
put_disk(gd);
out_free:
struct scsi_disk *sdkp = to_scsi_disk(dev);
struct gendisk *disk = sdkp->disk;
+ spin_lock(&sd_index_lock);
ida_remove(&sd_index_ida, sdkp->index);
+ spin_unlock(&sd_index_lock);
disk->private_data = NULL;
put_disk(disk);
struct i810fb_par *par = info->par;
int line_length, vidmem, mode_valid = 0, retval = 0;
u32 vyres = var->yres_virtual, vxres = var->xres_virtual;
+
/*
* Memory limit
*/
if (vidmem > par->fb.size) {
vyres = par->fb.size/line_length;
if (vyres < var->yres) {
- vyres = yres;
+ vyres = info->var.yres;
vxres = par->fb.size/vyres;
vxres /= var->bits_per_pixel >> 3;
line_length = get_line_length(par, vxres,
var->bits_per_pixel);
- vidmem = line_length * yres;
+ vidmem = line_length * info->var.yres;
if (vxres < var->xres) {
printk("i810fb: required video memory, "
"%d bytes, for %dx%d-%d (virtual) "
static struct platform_driver pxafb_driver = {
.probe = pxafb_probe,
- .remove = pxafb_remove,
+ .remove = __devexit_p(pxafb_remove),
.suspend = pxafb_suspend,
.resume = pxafb_resume,
.driver = {
{
struct sh_mobile_lcdc_chan *ch;
struct sh_mobile_lcdc_board_cfg *board_cfg;
- unsigned long tmp;
int k;
/* tell the board code to disable the panel */
if (board_cfg->display_off)
board_cfg->display_off(board_cfg->board_data);
- /* cleanup deferred io if SYS bus */
- tmp = ch->cfg.sys_bus_cfg.deferred_io_msec;
- if (ch->ldmt1r_value & (1 << 12) && tmp) {
+ /* cleanup deferred io if enabled */
+ if (ch->info.fbdefio) {
fb_deferred_io_cleanup(&ch->info);
ch->info.fbdefio = NULL;
}
bus_clk = 133; /* in MHz */
freq = fsl_get_sys_freq();
- if (freq > 0)
+ if (freq != -1)
bus_clk = freq;
/* Map devices registers into memory */
#include <linux/watchdog.h>
#include <linux/io.h>
#include <linux/uaccess.h>
+#include <mach/timex.h>
#include <mach/regs-timer.h>
#define WDT_DEFAULT_TIME 5 /* seconds */
#define WDT_EN 0x0010
#define WDT_VAL (TIMER_VIRT_BASE + 0x0024)
+#define ORION5X_TCLK 166666667
#define WDT_MAX_DURATION (0xffffffff / ORION5X_TCLK)
#define WDT_IN_USE 0
#define WDT_OK_TO_CLOSE 1
#include <asm/time.h>
#include <asm/mach-rc32434/integ.h>
-#define MAX_TIMEOUT 20
-#define RC32434_WDT_INTERVAL (15 * HZ)
-
-#define VERSION "0.2"
+#define VERSION "0.4"
static struct {
- struct completion stop;
- int running;
- struct timer_list timer;
- int queue;
- int default_ticks;
unsigned long inuse;
} rc32434_wdt_device;
static struct integ __iomem *wdt_reg;
-static int ticks = 100 * HZ;
static int expect_close;
-static int timeout;
+
+/* Internal board clock speed in Hz at which
+ * the watchdog timer ticks. */
+extern unsigned int idt_cpu_freq;
+
+/* translate wtcompare value to seconds and vice versa */
+#define WTCOMP2SEC(x) (x / idt_cpu_freq)
+#define SEC2WTCOMP(x) (x * idt_cpu_freq)
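+/* e.g. with idt_cpu_freq = 200000000, SEC2WTCOMP(20) = 4000000000,
+ * which still fits in the 32-bit wtcompare register */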
+
+/* Use a default timeout of 20s. This should be
+ * safe for CPU clock speeds up to 400MHz, as
+ * ((2^32) - 1) / (400MHz / 2) = 21s. */
+#define WATCHDOG_TIMEOUT 20
+
+static int timeout = WATCHDOG_TIMEOUT;
static int nowayout = WATCHDOG_NOWAYOUT;
module_param(nowayout, int, 0);
MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started (default="
__MODULE_STRING(WATCHDOG_NOWAYOUT) ")");
+/* apply or and nand masks to data read from addr and write back */
+#define SET_BITS(addr, or, nand) \
+ writel((readl(&addr) | or) & ~nand, &addr)
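+/* e.g. SET_BITS(wdt_reg->wtc, 1 << RC32434_WTC_EN, 0) sets the enable
+ * bit, and SET_BITS(wdt_reg->wtc, 0, 1 << RC32434_WTC_EN) clears it */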
static void rc32434_wdt_start(void)
{
- u32 val;
-
- if (!rc32434_wdt_device.inuse) {
- writel(0, &wdt_reg->wtcount);
+ u32 or, nand;
- val = RC32434_ERR_WRE;
- writel(readl(&wdt_reg->errcs) | val, &wdt_reg->errcs);
+ /* zero the counter before enabling */
+ writel(0, &wdt_reg->wtcount);
- val = RC32434_WTC_EN;
- writel(readl(&wdt_reg->wtc) | val, &wdt_reg->wtc);
- }
- rc32434_wdt_device.running++;
-}
+ /* don't generate a non-maskable interrupt,
+ * do a warm reset instead */
+ nand = 1 << RC32434_ERR_WNE;
+ or = 1 << RC32434_ERR_WRE;
-static void rc32434_wdt_stop(void)
-{
- u32 val;
+ /* reset the ERRCS timeout bit in case it's set */
+ nand |= 1 << RC32434_ERR_WTO;
- if (rc32434_wdt_device.running) {
+ SET_BITS(wdt_reg->errcs, or, nand);
- val = ~RC32434_WTC_EN;
- writel(readl(&wdt_reg->wtc) & val, &wdt_reg->wtc);
+ /* reset WTC timeout bit and enable WDT */
+ nand = 1 << RC32434_WTC_TO;
+ or = 1 << RC32434_WTC_EN;
- val = ~RC32434_ERR_WRE;
- writel(readl(&wdt_reg->errcs) & val, &wdt_reg->errcs);
+ SET_BITS(wdt_reg->wtc, or, nand);
+}
- rc32434_wdt_device.running = 0;
- }
+static void rc32434_wdt_stop(void)
+{
+ /* Disable WDT */
+ SET_BITS(wdt_reg->wtc, 0, 1 << RC32434_WTC_EN);
}
-static void rc32434_wdt_set(int new_timeout)
+static int rc32434_wdt_set(int new_timeout)
{
- u32 cmp = new_timeout * HZ;
- u32 state, val;
+ int max_to = WTCOMP2SEC((u32)-1);
+ if (new_timeout < 0 || new_timeout > max_to) {
+ printk(KERN_ERR KBUILD_MODNAME
+ ": timeout value must be between 0 and %d",
+ max_to);
+ return -EINVAL;
+ }
timeout = new_timeout;
- /*
- * store and disable WTC
- */
- state = (u32)(readl(&wdt_reg->wtc) & RC32434_WTC_EN);
- val = ~RC32434_WTC_EN;
- writel(readl(&wdt_reg->wtc) & val, &wdt_reg->wtc);
-
- writel(0, &wdt_reg->wtcount);
- writel(cmp, &wdt_reg->wtcompare);
-
- /*
- * restore WTC
- */
-
- writel(readl(&wdt_reg->wtc) | state, &wdt_reg);
-}
+ writel(SEC2WTCOMP(timeout), &wdt_reg->wtcompare);
-static void rc32434_wdt_reset(void)
-{
- ticks = rc32434_wdt_device.default_ticks;
+ return 0;
}
-static void rc32434_wdt_update(unsigned long unused)
+static void rc32434_wdt_ping(void)
{
- if (rc32434_wdt_device.running)
- ticks--;
-
writel(0, &wdt_reg->wtcount);
-
- if (rc32434_wdt_device.queue && ticks)
- mod_timer(&rc32434_wdt_device.timer,
- jiffies + RC32434_WDT_INTERVAL);
- else
- complete(&rc32434_wdt_device.stop);
}
static int rc32434_wdt_open(struct inode *inode, struct file *file)
if (nowayout)
__module_get(THIS_MODULE);
+ rc32434_wdt_start();
+ rc32434_wdt_ping();
+
return nonseekable_open(inode, file);
}
static int rc32434_wdt_release(struct inode *inode, struct file *file)
{
- if (expect_close && nowayout == 0) {
+ if (expect_close == 42) {
rc32434_wdt_stop();
printk(KERN_INFO KBUILD_MODNAME ": disabling watchdog timer\n");
module_put(THIS_MODULE);
- } else
+ } else {
		printk(KERN_CRIT KBUILD_MODNAME
			": device closed unexpectedly. WDT will not stop!\n");
-
+ rc32434_wdt_ping();
+ }
clear_bit(0, &rc32434_wdt_device.inuse);
return 0;
}
if (get_user(c, data + i))
return -EFAULT;
if (c == 'V')
- expect_close = 1;
+ expect_close = 42;
}
}
- rc32434_wdt_update(0);
+ rc32434_wdt_ping();
return len;
}
return 0;
};
switch (cmd) {
case WDIOC_KEEPALIVE:
- rc32434_wdt_reset();
+ rc32434_wdt_ping();
break;
case WDIOC_GETSTATUS:
case WDIOC_GETBOOTSTATUS:
- value = readl(&wdt_reg->wtcount);
+ value = 0;
if (copy_to_user(argp, &value, sizeof(int)))
return -EFAULT;
break;
break;
case WDIOS_DISABLECARD:
rc32434_wdt_stop();
+ break;
default:
return -EINVAL;
}
case WDIOC_SETTIMEOUT:
if (copy_from_user(&new_timeout, argp, sizeof(int)))
return -EFAULT;
- if (new_timeout < 1)
+ if (rc32434_wdt_set(new_timeout))
return -EINVAL;
- if (new_timeout > MAX_TIMEOUT)
- return -EINVAL;
- rc32434_wdt_set(new_timeout);
+ /* Fall through */
case WDIOC_GETTIMEOUT:
return copy_to_user(argp, &timeout, sizeof(int));
default:
.fops = &rc32434_wdt_fops,
};
-static char banner[] = KERN_INFO KBUILD_MODNAME
+static char banner[] __devinitdata = KERN_INFO KBUILD_MODNAME
": Watchdog Timer version " VERSION ", timer margin: %d sec\n";
-static int rc32434_wdt_probe(struct platform_device *pdev)
+static int __devinit rc32434_wdt_probe(struct platform_device *pdev)
{
int ret;
struct resource *r;
- r = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rb500_wdt_res");
+ r = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rb532_wdt_res");
if (!r) {
		printk(KERN_ERR KBUILD_MODNAME
			": failed to retrieve resources\n");
}
ret = misc_register(&rc32434_wdt_miscdev);
-
if (ret < 0) {
		printk(KERN_ERR KBUILD_MODNAME
			": failed to register watchdog device\n");
goto unmap;
}
- init_completion(&rc32434_wdt_device.stop);
- rc32434_wdt_device.queue = 0;
-
- clear_bit(0, &rc32434_wdt_device.inuse);
-
- setup_timer(&rc32434_wdt_device.timer, rc32434_wdt_update, 0L);
-
- rc32434_wdt_device.default_ticks = ticks;
-
- rc32434_wdt_start();
-
printk(banner, timeout);
return 0;
return ret;
}
-static int rc32434_wdt_remove(struct platform_device *pdev)
+static int __devexit rc32434_wdt_remove(struct platform_device *pdev)
{
- if (rc32434_wdt_device.queue) {
- rc32434_wdt_device.queue = 0;
- wait_for_completion(&rc32434_wdt_device.stop);
- }
misc_deregister(&rc32434_wdt_miscdev);
-
iounmap(wdt_reg);
-
return 0;
}
static struct platform_driver rc32434_wdt = {
.probe = rc32434_wdt_probe,
- .remove = rc32434_wdt_remove,
- .driver = {
+ .remove = __devexit_p(rc32434_wdt_remove),
+ .driver = {
.name = "rc32434_wdt",
}
};
# Do not add any filesystems before this line
obj-$(CONFIG_REISERFS_FS) += reiserfs/
obj-$(CONFIG_EXT3_FS) += ext3/ # Before ext2 so root fs can be ext3
-obj-$(CONFIG_EXT4_FS) += ext4/ # Before ext2 so root fs can be ext4
+obj-$(CONFIG_EXT2_FS) += ext2/
+# We place ext4 after ext2 so plain ext2 root filesystems are mounted using ext2
+# unless explicitly requested by rootfstype
+obj-$(CONFIG_EXT4_FS) += ext4/
obj-$(CONFIG_JBD) += jbd/
obj-$(CONFIG_JBD2) += jbd2/
-obj-$(CONFIG_EXT2_FS) += ext2/
obj-$(CONFIG_CRAMFS) += cramfs/
obj-$(CONFIG_SQUASHFS) += squashfs/
obj-y += ramfs/
if (*cow_ret == buf)
unlock_orig = 1;
- WARN_ON(!btrfs_tree_locked(buf));
+ btrfs_assert_tree_locked(buf);
if (parent)
parent_start = parent->start;
if (slot >= btrfs_header_nritems(upper) - 1)
return 1;
- WARN_ON(!btrfs_tree_locked(path->nodes[1]));
+ btrfs_assert_tree_locked(path->nodes[1]);
right = read_node_slot(root, upper, slot + 1);
btrfs_tree_lock(right);
if (right_nritems == 0)
return 1;
- WARN_ON(!btrfs_tree_locked(path->nodes[1]));
+ btrfs_assert_tree_locked(path->nodes[1]);
left = read_node_slot(root, path->nodes[1], slot - 1);
btrfs_tree_lock(left);
next = read_node_slot(root, c, slot);
if (!path->skip_locking) {
- WARN_ON(!btrfs_tree_locked(c));
+ btrfs_assert_tree_locked(c);
btrfs_tree_lock(next);
btrfs_set_lock_blocking(next);
}
reada_for_search(root, path, level, slot, 0);
next = read_node_slot(root, next, 0);
if (!path->skip_locking) {
- WARN_ON(!btrfs_tree_locked(path->nodes[level]));
+ btrfs_assert_tree_locked(path->nodes[level]);
btrfs_tree_lock(next);
btrfs_set_lock_blocking(next);
}
struct inode *btree_inode = root->fs_info->btree_inode;
if (btrfs_header_generation(buf) ==
root->fs_info->running_transaction->transid) {
- WARN_ON(!btrfs_tree_locked(buf));
+ btrfs_assert_tree_locked(buf);
/* ugh, clear_extent_buffer_dirty can be expensive */
btrfs_set_lock_blocking(buf);
btrfs_set_lock_blocking(buf);
- WARN_ON(!btrfs_tree_locked(buf));
+ btrfs_assert_tree_locked(buf);
if (transid != root->fs_info->generation) {
printk(KERN_CRIT "btrfs transid mismatch buffer %llu, "
"found %llu running %llu\n",
path = btrfs_alloc_path();
BUG_ON(!path);
- BUG_ON(!btrfs_tree_locked(parent));
+ btrfs_assert_tree_locked(parent);
parent_level = btrfs_header_level(parent);
extent_buffer_get(parent);
path->nodes[parent_level] = parent;
path->slots[parent_level] = btrfs_header_nritems(parent);
- BUG_ON(!btrfs_tree_locked(node));
+ btrfs_assert_tree_locked(node);
level = btrfs_header_level(node);
extent_buffer_get(node);
path->nodes[level] = node;
return 0;
}
-int btrfs_tree_locked(struct extent_buffer *eb)
+void btrfs_assert_tree_locked(struct extent_buffer *eb)
{
- return test_bit(EXTENT_BUFFER_BLOCKING, &eb->bflags) ||
- spin_is_locked(&eb->lock);
+ if (!test_bit(EXTENT_BUFFER_BLOCKING, &eb->bflags))
+ assert_spin_locked(&eb->lock);
}
int btrfs_tree_lock(struct extent_buffer *eb);
int btrfs_tree_unlock(struct extent_buffer *eb);
-int btrfs_tree_locked(struct extent_buffer *eb);
int btrfs_try_tree_lock(struct extent_buffer *eb);
int btrfs_try_spin_lock(struct extent_buffer *eb);
void btrfs_set_lock_blocking(struct extent_buffer *eb);
void btrfs_clear_lock_blocking(struct extent_buffer *eb);
+void btrfs_assert_tree_locked(struct extent_buffer *eb);
#endif
fsi->ptmx_dentry = dentry;
rc = 0;
-
- printk(KERN_DEBUG "Created ptmx node in devpts ino %lu\n",
- inode->i_ino);
out:
mutex_unlock(&root->d_inode->i_mutex);
return rc;
struct pts_fs_info *fsi;
struct pts_mount_opts *opts;
- printk(KERN_NOTICE "devpts: newinstance mount\n");
-
err = get_sb_nodev(fs_type, flags, data, devpts_fill_super, mnt);
if (err)
return err;
*/
int ext4_should_retry_alloc(struct super_block *sb, int *retries)
{
- if (!ext4_has_free_blocks(EXT4_SB(sb), 1) || (*retries)++ > 3)
+ if (!ext4_has_free_blocks(EXT4_SB(sb), 1) ||
+ (*retries)++ > 3 ||
+ !EXT4_SB(sb)->s_journal)
return 0;
jbd_debug(1, "%s: retrying operation after ENOSPC\n", sb->s_id);
struct ext4_group_desc *gdp;
struct ext4_super_block *es;
struct ext4_sb_info *sbi;
- int fatal = 0, err, count;
+ int fatal = 0, err, count, cleared;
ext4_group_t flex_group;
if (atomic_read(&inode->i_count) > 1) {
goto error_return;
/* Ok, now we can actually update the inode bitmaps.. */
- if (!ext4_clear_bit_atomic(sb_bgl_lock(sbi, block_group),
- bit, bitmap_bh->b_data))
+ spin_lock(sb_bgl_lock(sbi, block_group));
+ cleared = ext4_clear_bit(bit, bitmap_bh->b_data);
+ spin_unlock(sb_bgl_lock(sbi, block_group));
+ if (!cleared)
ext4_error(sb, "ext4_free_inode",
"bit already cleared for inode %lu", ino);
else {
ext4_journal_stop(handle);
- if (mpd.retval == -ENOSPC) {
+ if ((mpd.retval == -ENOSPC) && sbi->s_journal) {
/* commit the transaction which would
* free blocks released in the transaction
* and try again
/* Journal blocked and flushed, clear needs_recovery flag. */
EXT4_CLEAR_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_RECOVER);
- ext4_commit_super(sb, EXT4_SB(sb)->s_es, 1);
error = ext4_commit_super(sb, EXT4_SB(sb)->s_es, 1);
if (error)
goto out;
* generated a larger block - this does occasionally happen with zlib).
*/
int squashfs_read_data(struct super_block *sb, void **buffer, u64 index,
- int length, u64 *next_index, int srclength)
+ int length, u64 *next_index, int srclength, int pages)
{
struct squashfs_sb_info *msblk = sb->s_fs_info;
struct buffer_head **bh;
}
if (msblk->stream.avail_out == 0) {
+ if (page == pages) {
+ ERROR("zlib_inflate tried to "
+ "decompress too much data, "
+ "expected %d bytes. Zlib "
+ "data probably corrupt\n",
+ srclength);
+ goto release_mutex;
+ }
msblk->stream.next_out = buffer[page++];
msblk->stream.avail_out = PAGE_CACHE_SIZE;
}
put_bh(bh[k]);
read_failure:
- ERROR("sb_bread failed reading block 0x%llx\n", cur_index);
+ ERROR("squashfs_read_data failed to read block 0x%llx\n",
+ (unsigned long long) index);
kfree(bh);
return -EIO;
}
entry->length = squashfs_read_data(sb, entry->data,
block, length, &entry->next_index,
- cache->block_size);
+ cache->block_size, cache->pages);
spin_lock(&cache->lock);
for (i = 0; i < pages; i++, buffer += PAGE_CACHE_SIZE)
data[i] = buffer;
res = squashfs_read_data(sb, data, block, length |
- SQUASHFS_COMPRESSED_BIT_BLOCK, NULL, length);
+ SQUASHFS_COMPRESSED_BIT_BLOCK, NULL, length, pages);
kfree(data);
return res;
}
type = le16_to_cpu(sqshb_ino->inode_type);
switch (type) {
case SQUASHFS_REG_TYPE: {
- unsigned int frag_offset, frag_size, frag;
+ unsigned int frag_offset, frag;
+ int frag_size;
u64 frag_blk;
struct squashfs_reg_inode *sqsh_ino = &squashfs_ino.reg;
break;
}
case SQUASHFS_LREG_TYPE: {
- unsigned int frag_offset, frag_size, frag;
+ unsigned int frag_offset, frag;
+ int frag_size;
u64 frag_blk;
struct squashfs_lreg_inode *sqsh_ino = &squashfs_ino.lreg;
/* block.c */
extern int squashfs_read_data(struct super_block *, void **, u64, int, u64 *,
- int);
+ int, int);
/* cache.c */
extern struct squashfs_cache *squashfs_cache_init(char *, int, int);
return err;
}
- printk(KERN_INFO "squashfs: version 4.0 (2009/01/03) "
+ printk(KERN_INFO "squashfs: version 4.0 (2009/01/31) "
"Phillip Lougher\n");
return 0;
header-y += cgroupstats.h
header-y += cramfs_fs.h
header-y += cycx_cfm.h
+header-y += dcbnl.h
header-y += dlmconstants.h
header-y += dlm_device.h
header-y += dlm_netlink.h
ATA_ID_DLF = 128,
ATA_ID_CSFO = 129,
ATA_ID_CFA_POWER = 160,
+ ATA_ID_CFA_KEY_MGMT = 162,
+ ATA_ID_CFA_MODES = 163,
ATA_ID_ROT_SPEED = 217,
ATA_ID_PIO4 = (1 << 1),
#define BOOTMEM_DEFAULT 0
#define BOOTMEM_EXCLUSIVE (1<<0)
+extern int reserve_bootmem(unsigned long addr,
+ unsigned long size,
+ int flags);
extern int reserve_bootmem_node(pg_data_t *pgdat,
- unsigned long physaddr,
- unsigned long size,
- int flags);
-#ifndef CONFIG_HAVE_ARCH_BOOTMEM_NODE
-extern int reserve_bootmem(unsigned long addr, unsigned long size, int flags);
-#endif
+ unsigned long physaddr,
+ unsigned long size,
+ int flags);
-extern void *__alloc_bootmem_nopanic(unsigned long size,
+extern void *__alloc_bootmem(unsigned long size,
unsigned long align,
unsigned long goal);
-extern void *__alloc_bootmem(unsigned long size,
+extern void *__alloc_bootmem_nopanic(unsigned long size,
unsigned long align,
unsigned long goal);
-extern void *__alloc_bootmem_low(unsigned long size,
- unsigned long align,
- unsigned long goal);
extern void *__alloc_bootmem_node(pg_data_t *pgdat,
unsigned long size,
unsigned long align,
unsigned long size,
unsigned long align,
unsigned long goal);
+extern void *__alloc_bootmem_low(unsigned long size,
+ unsigned long align,
+ unsigned long goal);
extern void *__alloc_bootmem_low_node(pg_data_t *pgdat,
unsigned long size,
unsigned long align,
unsigned long goal);
-#ifndef CONFIG_HAVE_ARCH_BOOTMEM_NODE
+
#define alloc_bootmem(x) \
__alloc_bootmem(x, SMP_CACHE_BYTES, __pa(MAX_DMA_ADDRESS))
#define alloc_bootmem_nopanic(x) \
__alloc_bootmem_nopanic(x, SMP_CACHE_BYTES, __pa(MAX_DMA_ADDRESS))
-#define alloc_bootmem_low(x) \
- __alloc_bootmem_low(x, SMP_CACHE_BYTES, 0)
#define alloc_bootmem_pages(x) \
__alloc_bootmem(x, PAGE_SIZE, __pa(MAX_DMA_ADDRESS))
#define alloc_bootmem_pages_nopanic(x) \
__alloc_bootmem_nopanic(x, PAGE_SIZE, __pa(MAX_DMA_ADDRESS))
-#define alloc_bootmem_low_pages(x) \
- __alloc_bootmem_low(x, PAGE_SIZE, 0)
#define alloc_bootmem_node(pgdat, x) \
__alloc_bootmem_node(pgdat, x, SMP_CACHE_BYTES, __pa(MAX_DMA_ADDRESS))
#define alloc_bootmem_pages_node(pgdat, x) \
__alloc_bootmem_node(pgdat, x, PAGE_SIZE, __pa(MAX_DMA_ADDRESS))
+#define alloc_bootmem_pages_node_nopanic(pgdat, x) \
+ __alloc_bootmem_node_nopanic(pgdat, x, PAGE_SIZE, __pa(MAX_DMA_ADDRESS))
+
+#define alloc_bootmem_low(x) \
+ __alloc_bootmem_low(x, SMP_CACHE_BYTES, 0)
+#define alloc_bootmem_low_pages(x) \
+ __alloc_bootmem_low(x, PAGE_SIZE, 0)
#define alloc_bootmem_low_pages_node(pgdat, x) \
__alloc_bootmem_low_node(pgdat, x, PAGE_SIZE, 0)
-#endif /* !CONFIG_HAVE_ARCH_BOOTMEM_NODE */
extern int reserve_bootmem_generic(unsigned long addr, unsigned long size,
int flags);
int (*suspend) (struct cpufreq_policy *policy, pm_message_t pmsg);
int (*resume) (struct cpufreq_policy *policy);
struct freq_attr **attr;
- bool hide_interface;
};
/* flags */
#ifndef __LINUX_DCBNL_H__
#define __LINUX_DCBNL_H__
+#include <linux/types.h>
+
#define DCB_PROTO_VERSION 1
struct dcbmsg {
- unsigned char dcb_family;
+ __u8 dcb_family;
__u8 cmd;
__u16 dcb_pad;
};
--- /dev/null
+#ifndef DECOMPRESS_BUNZIP2_H
+#define DECOMPRESS_BUNZIP2_H
+
+int bunzip2(unsigned char *inbuf, int len,
+ int(*fill)(void*, unsigned int),
+ int(*flush)(void*, unsigned int),
+ unsigned char *output,
+ int *pos,
+ void(*error)(char *x));
+#endif
--- /dev/null
+#ifndef DECOMPRESS_GENERIC_H
+#define DECOMPRESS_GENERIC_H
+
+/* Minimal chunk size to be read.
+ * Bzip2 prefers at least 4096,
+ * Lzma prefers 0x10000 */
+#define COMPR_IOBUF_SIZE 4096
+
+typedef int (*decompress_fn) (unsigned char *inbuf, int len,
+ int(*fill)(void*, unsigned int),
+ int(*writebb)(void*, unsigned int),
+ unsigned char *output,
+ int *posp,
+ void(*error)(char *x));
+
+/* inbuf - input buffer
+ * len - length of pre-read data in inbuf
+ * fill - function to fill inbuf if empty
+ * writebb - function to write out the output buffer
+ * posp - if non-null, input position (number of bytes read) will be
+ * returned here
+ *
+ * If len != 0, inbuf is initialized (with that much data) and fill
+ * should not be called.
+ * If len = 0, inbuf is allocated but empty; its size is COMPR_IOBUF_SIZE.
+ * fill should be called (repeatedly...) to read data, at most
+ * COMPR_IOBUF_SIZE bytes at a time.
+ */
+
+/* Utility routine to detect the decompression method */
+decompress_fn decompress_method(const unsigned char *inbuf, int len,
+ const char **name);
+
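+/* Illustrative caller (not part of this interface): with the whole
+ * image already in memory, no fill callback is needed and the output
+ * can go straight to a flat buffer. "my_error" is a made-up hook.
+ *
+ *	const char *name;
+ *	decompress_fn fn = decompress_method(inbuf, len, &name);
+ *
+ *	if (fn)
+ *		fn(inbuf, len, NULL, NULL, outbuf, NULL, my_error);
+ */
+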
+#endif
--- /dev/null
+#ifndef INFLATE_H
+#define INFLATE_H
+
+/* Other housekeeping constants */
+#define INBUFSIZ 4096
+
+int gunzip(unsigned char *inbuf, int len,
+ int(*fill)(void*, unsigned int),
+ int(*flush)(void*, unsigned int),
+ unsigned char *output,
+ int *pos,
+ void(*error_fn)(char *x));
+#endif
--- /dev/null
+/*
+ * linux/compr_mm.h
+ *
+ * Memory management for pre-boot and ramdisk decompressors
+ *
+ * Authors: Alain Knaff <alain@knaff.lu>
+ *
+ */
+
+#ifndef DECOMPR_MM_H
+#define DECOMPR_MM_H
+
+#ifdef STATIC
+
+/* Code active when included from pre-boot environment: */
+
+/* A trivial malloc implementation, adapted from
+ * malloc by Hannu Savolainen 1993 and Matthias Urlichs 1994
+ */
+static unsigned long malloc_ptr;
+static int malloc_count;
+
+static void *malloc(int size)
+{
+ void *p;
+
+ if (size < 0)
+ error("Malloc error");
+ if (!malloc_ptr)
+ malloc_ptr = free_mem_ptr;
+
+ malloc_ptr = (malloc_ptr + 3) & ~3; /* Align */
+
+ p = (void *)malloc_ptr;
+ malloc_ptr += size;
+
+ if (free_mem_end_ptr && malloc_ptr >= free_mem_end_ptr)
+ error("Out of memory");
+
+ malloc_count++;
+ return p;
+}
+
+static void free(void *where)
+{
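+	/* the arena is only rewound once every outstanding allocation
+	 * has been freed; individual blocks are never reclaimed */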
+ malloc_count--;
+ if (!malloc_count)
+ malloc_ptr = free_mem_ptr;
+}
+
+#define large_malloc(a) malloc(a)
+#define large_free(a) free(a)
+
+#define set_error_fn(x)
+
+#define INIT
+
+#else /* STATIC */
+
+/* Code active when compiled standalone for use when loading ramdisk: */
+
+#include <linux/kernel.h>
+#include <linux/fs.h>
+#include <linux/string.h>
+#include <linux/vmalloc.h>
+
+/* Use defines rather than static inline in order to avoid spurious
+ * warnings when not needed (indeed large_malloc / large_free are not
+ * needed by inflate) */
+
+#define malloc(a) kmalloc(a, GFP_KERNEL)
+#define free(a) kfree(a)
+
+#define large_malloc(a) vmalloc(a)
+#define large_free(a) vfree(a)
+
+static void(*error)(char *m);
+#define set_error_fn(x) error = x;
+
+#define INIT __init
+#define STATIC
+
+#include <linux/init.h>
+
+#endif /* STATIC */
+
+#endif /* DECOMPR_MM_H */
--- /dev/null
+#ifndef DECOMPRESS_UNLZMA_H
+#define DECOMPRESS_UNLZMA_H
+
+int unlzma(unsigned char *, int,
+ int(*fill)(void*, unsigned int),
+ int(*flush)(void*, unsigned int),
+ unsigned char *output,
+ int *posp,
+ void(*error)(char *x)
+ );
+
+#endif
/**
* struct dma_chan_percpu - the per-CPU part of struct dma_chan
- * @refcount: local_t used for open-coded "bigref" counting
* @memcpy_count: transaction counter
* @bytes_transferred: byte counter
*/
* @cookie: last cookie value returned to client
* @chan_id: channel ID for sysfs
* @dev: class device for sysfs
- * @refcount: kref, used in "bigref" slow-mode
- * @slow_ref: indicates that the DMA channel is free
- * @rcu: the DMA channel's RCU head
* @device_node: used to add this to the device chan list
* @local: per-cpu pointer to a struct dma_chan_percpu
* @client-count: how many clients are using this channel
* @global_node: list_head for global dma_device_list
* @cap_mask: one or more dma_capability flags
* @max_xor: maximum number of xor sources, 0 if no capability
- * @refcount: reference count
- * @done: IO completion struct
* @dev_id: unique device ID
* @dev: struct device reference for dma mapping api
* @device_alloc_chan_resources: allocate resources and return the
* @device_prep_dma_interrupt: prepares an end of chain interrupt operation
* @device_prep_slave_sg: prepares a slave dma operation
* @device_terminate_all: terminate all pending operations
+ * @device_is_tx_complete: poll for transaction completion
* @device_issue_pending: push pending transactions to hardware
*/
struct dma_device {
unsigned short words69_70[2]; /* reserved words 69-70
* future command overlap and queuing
*/
- /* HDIO_GET_IDENTITY currently returns only words 0 through 70 */
unsigned short words71_74[4]; /* reserved words 71-74
* for IDENTIFY PACKET DEVICE command
*/
unsigned int n_ports;
struct device *dev[2];
unsigned int (*init_chipset)(struct pci_dev *);
+ irq_handler_t irq_handler;
unsigned long host_flags;
void *host_priv;
ide_hwif_t *cur_port; /* for hosts requiring serialization */
static inline void *
io_mapping_map_wc(struct io_mapping *mapping, unsigned long offset)
{
+ resource_size_t phys_addr;
+
BUG_ON(offset >= mapping->size);
- resource_size_t phys_addr = mapping->base + offset;
+ phys_addr = mapping->base + offset;
+
return ioremap_wc(phys_addr, PAGE_SIZE);
}
* advised to wait only for the following duration before
* doing SRST.
*/
- ATA_TMOUT_PMP_SRST_WAIT = 1000,
+ ATA_TMOUT_PMP_SRST_WAIT = 5000,
/* ATA bus states */
BUS_UNKNOWN = 0,
unsigned long flags; /* ATA_QCFLAG_xxx */
unsigned int tag;
unsigned int n_elem;
+ unsigned int orig_n_elem;
int dma_dir;
acpi_handle acpi_handle;
struct ata_acpi_gtm __acpi_init_gtm; /* use ata_acpi_init_gtm() */
#endif
- u8 sector_buf[ATA_SECT_SIZE]; /* owned by EH */
+ /* owned by EH */
+ u8 sector_buf[ATA_SECT_SIZE] ____cacheline_aligned;
};
/* The following initializer overrides a method to NULL whether one of
extern int register_netdevice_notifier(struct notifier_block *nb);
extern int unregister_netdevice_notifier(struct notifier_block *nb);
extern int init_dummy_netdev(struct net_device *dev);
+extern void netdev_resync_ops(struct net_device *dev);
extern int call_netdevice_notifiers(unsigned long val, struct net_device *dev);
extern struct net_device *dev_get_by_index(struct net *net, int ifindex);
#define _XT_NFLOG_TARGET
#define XT_NFLOG_DEFAULT_GROUP 0x1
-#define XT_NFLOG_DEFAULT_THRESHOLD 1
+#define XT_NFLOG_DEFAULT_THRESHOLD 0
#define XT_NFLOG_MASK 0x0
#include <linux/slab.h> /* For kmalloc() */
#include <linux/smp.h>
#include <linux/cpumask.h>
+#include <linux/pfn.h>
#include <asm/percpu.h>
#define EXPORT_PER_CPU_SYMBOL(var) EXPORT_SYMBOL(per_cpu__##var)
#define EXPORT_PER_CPU_SYMBOL_GPL(var) EXPORT_SYMBOL_GPL(per_cpu__##var)
-/* Enough to cover all DEFINE_PER_CPUs in kernel, including modules. */
-#ifndef PERCPU_ENOUGH_ROOM
+/* enough to cover all DEFINE_PER_CPUs in modules */
#ifdef CONFIG_MODULES
-#define PERCPU_MODULE_RESERVE 8192
+#define PERCPU_MODULE_RESERVE (8 << 10)
#else
-#define PERCPU_MODULE_RESERVE 0
+#define PERCPU_MODULE_RESERVE 0
#endif
+#ifndef PERCPU_ENOUGH_ROOM
#define PERCPU_ENOUGH_ROOM \
- (__per_cpu_end - __per_cpu_start + PERCPU_MODULE_RESERVE)
-#endif /* PERCPU_ENOUGH_ROOM */
+ (ALIGN(__per_cpu_end - __per_cpu_start, SMP_CACHE_BYTES) + \
+ PERCPU_MODULE_RESERVE)
+#endif
/*
* Must be an lvalue. Since @var must be a simple identifier,
#ifdef CONFIG_SMP
+#ifdef CONFIG_HAVE_DYNAMIC_PER_CPU_AREA
+
+/* minimum unit size, also is the maximum supported allocation size */
+#define PCPU_MIN_UNIT_SIZE PFN_ALIGN(64 << 10)
+
+/*
+ * PERCPU_DYNAMIC_RESERVE indicates the amount of free area to piggy
+ * back on the first chunk for dynamic percpu allocation if arch is
+ * manually allocating and mapping it for faster access (as a part of
+ * large page mapping for example).
+ *
+ * The following values give between one and two pages of free space
+ * after typical minimal boot (2-way SMP, single disk and NIC) with
+ * both defconfig and a distro config on x86_64 and 32. A more
+ * intelligent way to determine this would be nice.
+ */
+#if BITS_PER_LONG > 32
+#define PERCPU_DYNAMIC_RESERVE (20 << 10)
+#else
+#define PERCPU_DYNAMIC_RESERVE (12 << 10)
+#endif
+
+extern void *pcpu_base_addr;
+
+typedef struct page * (*pcpu_get_page_fn_t)(unsigned int cpu, int pageno);
+typedef void (*pcpu_populate_pte_fn_t)(unsigned long addr);
+
+extern size_t __init pcpu_setup_first_chunk(pcpu_get_page_fn_t get_page_fn,
+ size_t static_size, size_t reserved_size,
+ ssize_t unit_size, ssize_t dyn_size,
+ void *base_addr,
+ pcpu_populate_pte_fn_t populate_pte_fn);
+
+/*
+ * Use this to get to a cpu's version of the dynamically allocated
+ * per-cpu object. Non-atomic access to the current CPU's version
+ * should probably be combined with get_cpu()/put_cpu().
+ */
+#define per_cpu_ptr(ptr, cpu) SHIFT_PERCPU_PTR((ptr), per_cpu_offset((cpu)))
+
+extern void *__alloc_reserved_percpu(size_t size, size_t align);
+
+#else /* CONFIG_HAVE_DYNAMIC_PER_CPU_AREA */
+
struct percpu_data {
void *ptrs[1];
};
#define __percpu_disguise(pdata) (struct percpu_data *)~(unsigned long)(pdata)
-/*
- * Use this to get to a cpu's version of the per-cpu object dynamically
- * allocated. Non-atomic access to the current CPU's version should
- * probably be combined with get_cpu()/put_cpu().
- */
-#define percpu_ptr(ptr, cpu) \
-({ \
- struct percpu_data *__p = __percpu_disguise(ptr); \
- (__typeof__(ptr))__p->ptrs[(cpu)]; \
+
+#define per_cpu_ptr(ptr, cpu) \
+({ \
+ struct percpu_data *__p = __percpu_disguise(ptr); \
+ (__typeof__(ptr))__p->ptrs[(cpu)]; \
})
-extern void *__percpu_alloc_mask(size_t size, gfp_t gfp, cpumask_t *mask);
-extern void percpu_free(void *__pdata);
+#endif /* CONFIG_HAVE_DYNAMIC_PER_CPU_AREA */
+
+extern void *__alloc_percpu(size_t size, size_t align);
+extern void free_percpu(void *__pdata);
#else /* CONFIG_SMP */
-#define percpu_ptr(ptr, cpu) ({ (void)(cpu); (ptr); })
+#define per_cpu_ptr(ptr, cpu) ({ (void)(cpu); (ptr); })
-static __always_inline void *__percpu_alloc_mask(size_t size, gfp_t gfp, cpumask_t *mask)
+static inline void *__alloc_percpu(size_t size, size_t align)
{
- return kzalloc(size, gfp);
+ /*
+ * Can't easily make larger alignment work with kmalloc. WARN
+ * on it. Larger alignment should only be used for module
+ * percpu sections on SMP for which this path isn't used.
+ */
+ WARN_ON_ONCE(align > SMP_CACHE_BYTES);
+ return kzalloc(size, GFP_KERNEL);
}
-static inline void percpu_free(void *__pdata)
+static inline void free_percpu(void *p)
{
- kfree(__pdata);
+ kfree(p);
}
#endif /* CONFIG_SMP */
-#define percpu_alloc_mask(size, gfp, mask) \
- __percpu_alloc_mask((size), (gfp), &(mask))
-
-#define percpu_alloc(size, gfp) percpu_alloc_mask((size), (gfp), cpu_online_map)
-
-/* (legacy) interface for use without CPU hotplug handling */
-
-#define __alloc_percpu(size) percpu_alloc_mask((size), GFP_KERNEL, \
- cpu_possible_map)
-#define alloc_percpu(type) (type *)__alloc_percpu(sizeof(type))
-#define free_percpu(ptr) percpu_free((ptr))
-#define per_cpu_ptr(ptr, cpu) percpu_ptr((ptr), (cpu))
+#define alloc_percpu(type) (type *)__alloc_percpu(sizeof(type), \
+ __alignof__(type))
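+
+/* Illustrative usage of the unified interface above (struct my_counter
+ * is made up for the example):
+ *
+ *	struct my_counter { unsigned long hits; };
+ *	struct my_counter *c = alloc_percpu(struct my_counter);
+ *	int cpu;
+ *
+ *	if (c) {
+ *		for_each_online_cpu(cpu)
+ *			per_cpu_ptr(c, cpu)->hits = 0;
+ *		free_percpu(c);
+ *	}
+ */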
#endif /* __LINUX_PERCPU_H */
#define rcu_enter_nohz() do { } while (0)
#define rcu_exit_nohz() do { } while (0)
+/* A context switch is a grace period for rcuclassic. */
+static inline int rcu_blocking_is_gp(void)
+{
+ return num_online_cpus() == 1;
+}
+
#endif /* __LINUX_RCUCLASSIC_H */
void (*func)(struct rcu_head *head);
};
+/* Internal to kernel, but needed by rcupreempt.h. */
+extern int rcu_scheduler_active;
+
#if defined(CONFIG_CLASSIC_RCU)
#include <linux/rcuclassic.h>
#elif defined(CONFIG_TREE_RCU)
/* Internal to kernel */
extern void rcu_init(void);
+extern void rcu_scheduler_starting(void);
extern int rcu_needs_cpu(int cpu);
#endif /* __LINUX_RCUPDATE_H */
#define rcu_exit_nohz() do { } while (0)
#endif /* CONFIG_NO_HZ */
+/*
+ * A context switch is a grace period for rcupreempt synchronize_rcu()
+ * only during early boot, before the scheduler has been initialized.
+ * So, how the heck do we get a context switch? Well, if the caller
+ * invokes synchronize_rcu(), they are willing to accept a context
+ * switch, so we simply pretend that one happened.
+ *
+ * After boot, there might be a blocked or preempted task in an RCU
+ * read-side critical section, so we cannot then take the fastpath.
+ */
+static inline int rcu_blocking_is_gp(void)
+{
+ return num_online_cpus() == 1 && !rcu_scheduler_active;
+}
+
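+/* Illustrative fastpath (a sketch, not the full implementation) showing
+ * how a blocking primitive can use the helper above:
+ *
+ *	void synchronize_rcu(void)
+ *	{
+ *		if (rcu_blocking_is_gp())
+ *			return;	(the call itself is the grace period)
+ *		(otherwise block until a full grace period elapses)
+ *	}
+ */
+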
#endif /* __LINUX_RCUPREEMPT_H */
}
#endif /* CONFIG_NO_HZ */
+/* A context switch is a grace period for rcutree. */
+static inline int rcu_blocking_is_gp(void)
+{
+ return num_online_cpus() == 1;
+}
+
#endif /* __LINUX_RCUTREE_H */
extern int sched_group_set_rt_period(struct task_group *tg,
long rt_period_us);
extern long sched_group_rt_period(struct task_group *tg);
+extern int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk);
#endif
#endif
+extern int task_can_switch_user(struct user_struct *up,
+ struct task_struct *tsk);
+
#ifdef CONFIG_TASK_XACCT
static inline void add_rchar(struct task_struct *tsk, ssize_t amt)
{
#define SERIO_FUJITSU 0x35
#define SERIO_ZHENHUA 0x36
#define SERIO_INEXIO 0x37
-#define SERIO_TOUCHIT213 0x37
+#define SERIO_TOUCHIT213 0x38
#define SERIO_W8001 0x39
#endif
#ifndef ARCH_HAS_NOCACHE_UACCESS
static inline unsigned long __copy_from_user_inatomic_nocache(void *to,
- const void __user *from, unsigned long n, unsigned long total)
+ const void __user *from, unsigned long n)
{
return __copy_from_user_inatomic(to, from, n);
}
static inline unsigned long __copy_from_user_nocache(void *to,
- const void __user *from, unsigned long n, unsigned long total)
+ const void __user *from, unsigned long n)
{
return __copy_from_user(to, from, n);
}
extern int map_vm_area(struct vm_struct *area, pgprot_t prot,
struct page ***pages);
+extern int map_kernel_range_noflush(unsigned long start, unsigned long size,
+ pgprot_t prot, struct page **pages);
+extern void unmap_kernel_range_noflush(unsigned long addr, unsigned long size);
extern void unmap_kernel_range(unsigned long addr, unsigned long size);
/* Allocate/destroy a 'vmalloc' VM area. */
*/
extern rwlock_t vmlist_lock;
extern struct vm_struct *vmlist;
+extern __init void vm_area_register_early(struct vm_struct *vm, size_t align);
#endif /* _LINUX_VMALLOC_H */
#ifdef CONFIG_NET_NS
extern void __put_net(struct net *net);
-static inline int net_alive(struct net *net)
-{
- return net && atomic_read(&net->count);
-}
-
static inline struct net *get_net(struct net *net)
{
atomic_inc(&net->count);
}
#else
-static inline int net_alive(struct net *net)
-{
- return 1;
-}
-
static inline struct net *get_net(struct net *net)
{
return net;
void (*exit)(struct net *net);
};
+/*
+ * Use these carefully. If you implement a network device and it
+ * needs per network namespace operations use device pernet operations,
+ * otherwise use pernet subsys operations.
+ *
+ * This is critically important. Most of the network cleanup code
+ * runs under the assumption that dev_remove_pack has been called, so
+ * no new packets will arrive during or after the cleanup functions
+ * run. dev_remove_pack is not per namespace, so instead the
+ * guarantee of no more packets arriving in a network namespace is
+ * provided by ensuring that all network devices and all sockets have
+ * left the network namespace before the cleanup methods are called.
+ *
+ * For the longest time the ipv4 icmp code was registered as a pernet
+ * device, which caused kernel oopses and panics during network
+ * namespace cleanup. So please don't get this wrong.
+ */
extern int register_pernet_subsys(struct pernet_operations *);
extern void unregister_pernet_subsys(struct pernet_operations *);
extern int register_pernet_gen_subsys(int *id, struct pernet_operations *);
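A minimal sketch of subsystem-style usage (all my_* names are
hypothetical):

	static int my_subsys_init(struct net *net)
	{
		/* allocate per-namespace state for 'net' */
		return 0;
	}

	static void my_subsys_exit(struct net *net)
	{
		/* all devices and sockets have already left 'net' */
	}

	static struct pernet_operations my_subsys_ops = {
		.init = my_subsys_init,
		.exit = my_subsys_exit,
	};

	/* typically from the subsystem's module init: */
	err = register_pernet_subsys(&my_subsys_ops);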
struct nf_conn *ct = (struct nf_conn *)skb->nfct;
int ret = NF_ACCEPT;
- if (ct) {
+ if (ct && ct != &nf_conntrack_untracked) {
if (!nf_ct_is_confirmed(ct) && !nf_ct_is_dying(ct))
ret = __nf_conntrack_confirm(skb);
nf_ct_deliver_cached_events(ct);
which is done within the script "scripts/setlocalversion".)
+config HAVE_KERNEL_GZIP
+ bool
+
+config HAVE_KERNEL_BZIP2
+ bool
+
+config HAVE_KERNEL_LZMA
+ bool
+
+choice
+ prompt "Kernel compression mode"
+ default KERNEL_GZIP
+ depends on HAVE_KERNEL_GZIP || HAVE_KERNEL_BZIP2 || HAVE_KERNEL_LZMA
+ help
+ The Linux kernel is a kind of self-extracting executable.
+ Several compression algorithms are available, which differ
+ in efficiency, compression speed and decompression speed.
+ Compression speed is only relevant when building a kernel.
+ Decompression speed is relevant at each boot.
+
+ If you have any problems with bzip2 or lzma compressed
+ kernels, mail me (Alain Knaff) <alain@knaff.lu>. (An older
+ version of this functionality, bzip2 only, for 2.4, was
+ supplied by Christian Ludwig.)
+
+ High compression options are mostly useful for users who
+ are low on disk space (embedded systems), but for whom RAM
+ size matters less.
+
+ If in doubt, select 'gzip'.
+
+config KERNEL_GZIP
+ bool "Gzip"
+ depends on HAVE_KERNEL_GZIP
+ help
+ The old and tried gzip compression. Its compression ratio is
+ the poorest among the 3 choices; however its speed (both
+ compression and decompression) is the fastest.
+
+config KERNEL_BZIP2
+ bool "Bzip2"
+ depends on HAVE_KERNEL_BZIP2
+ help
+ Its compression ratio and speed are intermediate.
+ Decompression speed is the slowest among the three. The
+ kernel size is about 10% smaller with bzip2, in comparison
+ to gzip. Bzip2 uses a large amount of memory; for modern
+ kernels you will need at least 8MB of RAM for booting.
+
+config KERNEL_LZMA
+ bool "LZMA"
+ depends on HAVE_KERNEL_LZMA
+ help
+ The most recent compression algorithm.
+ Its compression ratio is the best, and its decompression
+ speed is between the other two; compression is the slowest.
+ The kernel size is about 33% smaller with LZMA in
+ comparison to gzip.
+
+endchoice
+
config SWAP
bool "Support for paging of anonymous memory (swap)"
depends on MMU && BLOCK
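Exactly one of the three KERNEL_* options above is active in any given
configuration; a kernel built with LZMA compression, for instance, would
carry something like the following in its .config (illustrative):

	CONFIG_HAVE_KERNEL_LZMA=y
	CONFIG_KERNEL_LZMA=y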
config SYSCTL
bool
+config ANON_INODES
+ bool
+
menuconfig EMBEDDED
bool "Configure standard kernel features (for small systems)"
help
This option allows to disable the internal PC-Speaker
support, saving some memory.
-config COMPAT_BRK
- bool "Disable heap randomization"
- default y
- help
- Randomizing heap placement makes heap exploits harder, but it
- also breaks ancient binaries (including anything libc5 based).
- This option changes the bootup default to heap randomization
- disabled, and can be overriden runtime by setting
- /proc/sys/kernel/randomize_va_space to 2.
-
- On non-ancient distros (post-2000 ones) N is usually a safe choice.
-
config BASE_FULL
default y
bool "Enable full-sized data structures for core" if EMBEDDED
support for "fast userspace mutexes". The resulting kernel may not
run glibc-based applications correctly.
-config ANON_INODES
- bool
-
config EPOLL
bool "Enable eventpoll support" if EMBEDDED
default y
SLUB sysfs support. /sys/slab will not exist and there will be
no support for cache validation etc.
+config COMPAT_BRK
+ bool "Disable heap randomization"
+ default y
+ help
+ Randomizing heap placement makes heap exploits harder, but it
+ also breaks ancient binaries (including anything libc5 based).
+ This option changes the bootup default to heap randomization
+ disabled, and can be overridden at runtime by setting
+ /proc/sys/kernel/randomize_va_space to 2.
+
+ On non-ancient distros (post-2000 ones) N is usually a safe choice.
+
choice
prompt "Choose SLAB allocator"
default SLUB
#include "do_mounts.h"
#include "../fs/squashfs/squashfs_fs.h"
+#include <linux/decompress/generic.h>
+
+
int __initdata rd_prompt = 1;/* 1 = prompt for RAM disk, 0 = don't prompt */
static int __init prompt_ramdisk(char *str)
}
__setup("ramdisk_start=", ramdisk_start_setup);
-static int __init crd_load(int in_fd, int out_fd);
+static int __init crd_load(int in_fd, int out_fd, decompress_fn deco);
/*
* This routine tries to find a RAM disk image to load, and returns the
* numbers could not be found.
*
* We currently check for the following magic numbers:
- * minix
- * ext2
+ * minix
+ * ext2
* romfs
* cramfs
* squashfs
- * gzip
+ * gzip
*/
-static int __init
-identify_ramdisk_image(int fd, int start_block)
+static int __init
+identify_ramdisk_image(int fd, int start_block, decompress_fn *decompressor)
{
const int size = 512;
struct minix_super_block *minixsb;
struct squashfs_super_block *squashfsb;
int nblocks = -1;
unsigned char *buf;
+ const char *compress_name;
buf = kmalloc(size, GFP_KERNEL);
if (!buf)
memset(buf, 0xe5, size);
/*
- * Read block 0 to test for gzipped kernel
+ * Read block 0 to test for compressed kernel
*/
sys_lseek(fd, start_block * BLOCK_SIZE, 0);
sys_read(fd, buf, size);
- /*
- * If it matches the gzip magic numbers, return 0
- */
- if (buf[0] == 037 && ((buf[1] == 0213) || (buf[1] == 0236))) {
- printk(KERN_NOTICE
- "RAMDISK: Compressed image found at block %d\n",
- start_block);
+ *decompressor = decompress_method(buf, size, &compress_name);
+ if (compress_name) {
+ printk(KERN_NOTICE "RAMDISK: %s image found at block %d\n",
+ compress_name, start_block);
+ if (!*decompressor)
+ printk(KERN_EMERG
+ "RAMDISK: %s decompressor not configured!\n",
+ compress_name);
nblocks = 0;
goto done;
}
printk(KERN_NOTICE
"RAMDISK: Couldn't find valid RAM disk image starting at %d.\n",
start_block);
-
+
done:
sys_lseek(fd, start_block * BLOCK_SIZE, 0);
kfree(buf);
int nblocks, i, disk;
char *buf = NULL;
unsigned short rotate = 0;
+ decompress_fn decompressor = NULL;
#if !defined(CONFIG_S390) && !defined(CONFIG_PPC_ISERIES)
char rotator[4] = { '|' , '/' , '-' , '\\' };
#endif
if (in_fd < 0)
goto noclose_input;
- nblocks = identify_ramdisk_image(in_fd, rd_image_start);
+ nblocks = identify_ramdisk_image(in_fd, rd_image_start, &decompressor);
if (nblocks < 0)
goto done;
if (nblocks == 0) {
- if (crd_load(in_fd, out_fd) == 0)
+ if (crd_load(in_fd, out_fd, decompressor) == 0)
goto successful_load;
goto done;
}
nblocks, rd_blocks);
goto done;
}
-
+
/*
* OK, time to copy in the data
*/
return rd_load_image("/dev/root");
}
-/*
- * gzip declarations
- */
-
-#define OF(args) args
-
-#ifndef memzero
-#define memzero(s, n) memset ((s), 0, (n))
-#endif
-
-typedef unsigned char uch;
-typedef unsigned short ush;
-typedef unsigned long ulg;
-
-#define INBUFSIZ 4096
-#define WSIZE 0x8000 /* window size--must be a power of two, and */
- /* at least 32K for zip's deflate method */
-
-static uch *inbuf;
-static uch *window;
-
-static unsigned insize; /* valid bytes in inbuf */
-static unsigned inptr; /* index of next byte to be processed in inbuf */
-static unsigned outcnt; /* bytes in output buffer */
static int exit_code;
-static int unzip_error;
-static long bytes_out;
+static int decompress_error;
static int crd_infd, crd_outfd;
-#define get_byte() (inptr < insize ? inbuf[inptr++] : fill_inbuf())
-
-/* Diagnostic functions (stubbed out) */
-#define Assert(cond,msg)
-#define Trace(x)
-#define Tracev(x)
-#define Tracevv(x)
-#define Tracec(c,x)
-#define Tracecv(c,x)
-
-#define STATIC static
-#define INIT __init
-
-static int __init fill_inbuf(void);
-static void __init flush_window(void);
-static void __init error(char *m);
-
-#define NO_INFLATE_MALLOC
-
-#include "../lib/inflate.c"
-
-/* ===========================================================================
- * Fill the input buffer. This is called only when the buffer is empty
- * and at least one byte is really needed.
- * Returning -1 does not guarantee that gunzip() will ever return.
- */
-static int __init fill_inbuf(void)
+static int __init compr_fill(void *buf, unsigned int len)
{
- if (exit_code) return -1;
-
- insize = sys_read(crd_infd, inbuf, INBUFSIZ);
- if (insize == 0) {
- error("RAMDISK: ran out of compressed data");
- return -1;
- }
-
- inptr = 1;
-
- return inbuf[0];
+ int r = sys_read(crd_infd, buf, len);
+ if (r < 0)
+ printk(KERN_ERR "RAMDISK: error while reading compressed data");
+ else if (r == 0)
+ printk(KERN_ERR "RAMDISK: EOF while reading compressed data");
+ return r;
}
-/* ===========================================================================
- * Write the output window window[0..outcnt-1] and update crc and bytes_out.
- * (Used for the decompressed data only.)
- */
-static void __init flush_window(void)
+static int __init compr_flush(void *window, unsigned int outcnt)
{
- ulg c = crc; /* temporary variable */
- unsigned n, written;
- uch *in, ch;
-
- written = sys_write(crd_outfd, window, outcnt);
- if (written != outcnt && unzip_error == 0) {
- printk(KERN_ERR "RAMDISK: incomplete write (%d != %d) %ld\n",
- written, outcnt, bytes_out);
- unzip_error = 1;
- }
- in = window;
- for (n = 0; n < outcnt; n++) {
- ch = *in++;
- c = crc_32_tab[((int)c ^ ch) & 0xff] ^ (c >> 8);
- }
- crc = c;
- bytes_out += (ulg)outcnt;
- outcnt = 0;
+ int written = sys_write(crd_outfd, window, outcnt);
+ if (written != outcnt) {
+ if (decompress_error == 0)
+ printk(KERN_ERR
+ "RAMDISK: incomplete write (%d != %d)\n",
+ written, outcnt);
+ decompress_error = 1;
+ return -1;
+ }
+ return outcnt;
}
static void __init error(char *x)
{
printk(KERN_ERR "%s\n", x);
exit_code = 1;
- unzip_error = 1;
+ decompress_error = 1;
}
-static int __init crd_load(int in_fd, int out_fd)
+static int __init crd_load(int in_fd, int out_fd, decompress_fn deco)
{
int result;
-
- insize = 0; /* valid bytes in inbuf */
- inptr = 0; /* index of next byte to be processed in inbuf */
- outcnt = 0; /* bytes in output buffer */
- exit_code = 0;
- bytes_out = 0;
- crc = (ulg)0xffffffffL; /* shift register contents */
-
crd_infd = in_fd;
crd_outfd = out_fd;
- inbuf = kmalloc(INBUFSIZ, GFP_KERNEL);
- if (!inbuf) {
- printk(KERN_ERR "RAMDISK: Couldn't allocate gzip buffer\n");
- return -1;
- }
- window = kmalloc(WSIZE, GFP_KERNEL);
- if (!window) {
- printk(KERN_ERR "RAMDISK: Couldn't allocate gzip window\n");
- kfree(inbuf);
- return -1;
- }
- makecrc();
- result = gunzip();
- if (unzip_error)
+ result = deco(NULL, 0, compr_fill, compr_flush, NULL, NULL, error);
+ if (decompress_error)
result = 1;
- kfree(inbuf);
- kfree(window);
return result;
}
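For reference, the generic decompressor type used above comes from
<linux/decompress/generic.h> and follows the calling convention that the
gunzip() and bunzip2() definitions later in this series implement:

	typedef int (*decompress_fn)(unsigned char *inbuf, int len,
				     int (*fill)(void *, unsigned int),
				     int (*flush)(void *, unsigned int),
				     unsigned char *outbuf,
				     int *posp,
				     void (*error_fn)(char *x));

Passing a NULL inbuf, as crd_load() does, makes the decompressor pull
input through fill() and push output through flush(), which is how the
RAM disk is streamed from one fd to the other.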
return len - count;
}
-static void __init flush_buffer(char *buf, unsigned len)
+static int __init flush_buffer(void *bufv, unsigned len)
{
+ char *buf = (char *) bufv;
int written;
+ int origLen = len;
if (message)
- return;
+ return -1;
while ((written = write_buffer(buf, len)) < len && !message) {
char c = buf[written];
if (c == '0') {
} else
error("junk in compressed archive");
}
+ return origLen;
}
-/*
- * gzip declarations
- */
+static unsigned my_inptr; /* index of next byte to be processed in inbuf */
-#define OF(args) args
-
-#ifndef memzero
-#define memzero(s, n) memset ((s), 0, (n))
-#endif
-
-typedef unsigned char uch;
-typedef unsigned short ush;
-typedef unsigned long ulg;
-
-#define WSIZE 0x8000 /* window size--must be a power of two, and */
- /* at least 32K for zip's deflate method */
-
-static uch *inbuf;
-static uch *window;
-
-static unsigned insize; /* valid bytes in inbuf */
-static unsigned inptr; /* index of next byte to be processed in inbuf */
-static unsigned outcnt; /* bytes in output buffer */
-static long bytes_out;
-
-#define get_byte() (inptr < insize ? inbuf[inptr++] : -1)
-
-/* Diagnostic functions (stubbed out) */
-#define Assert(cond,msg)
-#define Trace(x)
-#define Tracev(x)
-#define Tracevv(x)
-#define Tracec(c,x)
-#define Tracecv(c,x)
-
-#define STATIC static
-#define INIT __init
-
-static void __init flush_window(void);
-static void __init error(char *m);
-
-#define NO_INFLATE_MALLOC
-
-#include "../lib/inflate.c"
-
-/* ===========================================================================
- * Write the output window window[0..outcnt-1] and update crc and bytes_out.
- * (Used for the decompressed data only.)
- */
-static void __init flush_window(void)
-{
- ulg c = crc; /* temporary variable */
- unsigned n;
- uch *in, ch;
-
- flush_buffer(window, outcnt);
- in = window;
- for (n = 0; n < outcnt; n++) {
- ch = *in++;
- c = crc_32_tab[((int)c ^ ch) & 0xff] ^ (c >> 8);
- }
- crc = c;
- bytes_out += (ulg)outcnt;
- outcnt = 0;
-}
+#include <linux/decompress/generic.h>
static char * __init unpack_to_rootfs(char *buf, unsigned len, int check_only)
{
int written;
+ decompress_fn decompress;
+ const char *compress_name;
+ static __initdata char msg_buf[64];
+
dry_run = check_only;
header_buf = kmalloc(110, GFP_KERNEL);
symlink_buf = kmalloc(PATH_MAX + N_ALIGN(PATH_MAX) + 1, GFP_KERNEL);
name_buf = kmalloc(N_ALIGN(PATH_MAX), GFP_KERNEL);
- window = kmalloc(WSIZE, GFP_KERNEL);
- if (!window || !header_buf || !symlink_buf || !name_buf)
+
+ if (!header_buf || !symlink_buf || !name_buf)
panic("can't allocate buffers");
+
state = Start;
this_header = 0;
message = NULL;
continue;
}
this_header = 0;
- insize = len;
- inbuf = buf;
- inptr = 0;
- outcnt = 0; /* bytes in output buffer */
- bytes_out = 0;
- crc = (ulg)0xffffffffL; /* shift register contents */
- makecrc();
- gunzip();
+ decompress = decompress_method(buf, len, &compress_name);
+ if (decompress)
+ decompress(buf, len, NULL, flush_buffer, NULL,
+ &my_inptr, error);
+ else if (compress_name) {
+ if (!message) {
+ snprintf(msg_buf, sizeof msg_buf,
+ "compression method %s not configured",
+ compress_name);
+ message = msg_buf;
+ }
+ }
if (state != Reset)
- error("junk in gzipped archive");
- this_header = saved_offset + inptr;
- buf += inptr;
- len -= inptr;
+ error("junk in compressed archive");
+ this_header = saved_offset + my_inptr;
+ buf += my_inptr;
+ len -= my_inptr;
}
dir_utime();
- kfree(window);
kfree(name_buf);
kfree(symlink_buf);
kfree(header_buf);
char *err = unpack_to_rootfs(__initramfs_start,
__initramfs_end - __initramfs_start, 0);
if (err)
- panic(err);
+ panic(err); /* Failed to decompress INTERNAL initramfs */
if (initrd_start) {
#ifdef CONFIG_BLK_DEV_RAM
int fd;
printk(KERN_INFO "Unpacking initramfs...");
err = unpack_to_rootfs((char *)initrd_start,
initrd_end - initrd_start, 0);
- if (err)
- panic(err);
- printk(" done\n");
+ if (err) {
+ printk(" failed!\n");
+ printk(KERN_EMERG "%s\n", err);
+ } else {
+ printk(" done\n");
+ }
free_initrd();
#endif
}
extern void tc_init(void);
#endif
-enum system_states system_state;
+enum system_states system_state __read_mostly;
EXPORT_SYMBOL(system_state);
/*
* at least once to get things moving:
*/
init_idle_bootup_task(current);
+ rcu_scheduler_starting();
preempt_enable_no_resched();
schedule();
preempt_disable();
#endif
clear_all_latency_tracing(p);
- /* Our parent execution domain becomes current domain
- These must match for thread signalling to apply */
- p->parent_exec_id = p->self_exec_id;
-
/* ok, now we should be set up.. */
p->exit_signal = (clone_flags & CLONE_THREAD) ? -1 : (clone_flags & CSIGNAL);
p->pdeath_signal = 0;
set_task_cpu(p, smp_processor_id());
/* CLONE_PARENT re-uses the old parent */
- if (clone_flags & (CLONE_PARENT|CLONE_THREAD))
+ if (clone_flags & (CLONE_PARENT|CLONE_THREAD)) {
p->real_parent = current->real_parent;
- else
+ p->parent_exec_id = current->parent_exec_id;
+ } else {
p->real_parent = current;
+ p->parent_exec_id = current->self_exec_id;
+ }
spin_lock(¤t->sighand->siglock);
#include <linux/tracepoint.h>
#include <linux/ftrace.h>
#include <linux/async.h>
+#include <linux/percpu.h>
#if 0
#define DEBUGP printk
}
#ifdef CONFIG_SMP
+
+#ifdef CONFIG_HAVE_DYNAMIC_PER_CPU_AREA
+
+static void *percpu_modalloc(unsigned long size, unsigned long align,
+ const char *name)
+{
+ void *ptr;
+
+ if (align > PAGE_SIZE) {
+ printk(KERN_WARNING "%s: per-cpu alignment %li > %li\n",
+ name, align, PAGE_SIZE);
+ align = PAGE_SIZE;
+ }
+
+ ptr = __alloc_reserved_percpu(size, align);
+ if (!ptr)
+ printk(KERN_WARNING
+ "Could not allocate %lu bytes percpu data\n", size);
+ return ptr;
+}
+
+static void percpu_modfree(void *freeme)
+{
+ free_percpu(freeme);
+}
+
+#else /* ... !CONFIG_HAVE_DYNAMIC_PER_CPU_AREA */
+
/* Number of blocks used and allocated. */
static unsigned int pcpu_num_used, pcpu_num_allocated;
/* Size of each block. -ve means used. */
}
}
-static unsigned int find_pcpusec(Elf_Ehdr *hdr,
- Elf_Shdr *sechdrs,
- const char *secstrings)
-{
- return find_sec(hdr, sechdrs, secstrings, ".data.percpu");
-}
-
-static void percpu_modcopy(void *pcpudest, const void *from, unsigned long size)
-{
- int cpu;
-
- for_each_possible_cpu(cpu)
- memcpy(pcpudest + per_cpu_offset(cpu), from, size);
-}
-
static int percpu_modinit(void)
{
pcpu_num_used = 2;
return 0;
}
__initcall(percpu_modinit);
+
+#endif /* CONFIG_HAVE_DYNAMIC_PER_CPU_AREA */
+
+static unsigned int find_pcpusec(Elf_Ehdr *hdr,
+ Elf_Shdr *sechdrs,
+ const char *secstrings)
+{
+ return find_sec(hdr, sechdrs, secstrings, ".data.percpu");
+}
+
+static void percpu_modcopy(void *pcpudest, const void *from, unsigned long size)
+{
+ int cpu;
+
+ for_each_possible_cpu(cpu)
+ memcpy(pcpudest + per_cpu_offset(cpu), from, size);
+}
+
#else /* ... !CONFIG_SMP */
+
static inline void *percpu_modalloc(unsigned long size, unsigned long align,
const char *name)
{
/* pcpusec should be 0, and size of that section should be 0. */
BUG_ON(size != 0);
}
+
#endif /* CONFIG_SMP */
#define MODINFO_ATTR(field) \
void rcu_check_callbacks(int cpu, int user)
{
if (user ||
- (idle_cpu(cpu) && !in_softirq() &&
- hardirq_count() <= (1 << HARDIRQ_SHIFT))) {
+ (idle_cpu(cpu) && rcu_scheduler_active &&
+ !in_softirq() && hardirq_count() <= (1 << HARDIRQ_SHIFT))) {
/*
* Get here if this CPU took its interrupt from user
#include <linux/cpu.h>
#include <linux/mutex.h>
#include <linux/module.h>
+#include <linux/kernel_stat.h>
enum rcu_barrier {
RCU_BARRIER_STD,
static atomic_t rcu_barrier_cpu_count;
static DEFINE_MUTEX(rcu_barrier_mutex);
static struct completion rcu_barrier_completion;
+int rcu_scheduler_active __read_mostly;
/*
* Awaken the corresponding synchronize_rcu() instance now that a
void synchronize_rcu(void)
{
struct rcu_synchronize rcu;
+
+ if (rcu_blocking_is_gp())
+ return;
+
init_completion(&rcu.completion);
/* Will wake me after RCU finished. */
call_rcu(&rcu.head, wakeme_after_rcu);
__rcu_init();
}
+void rcu_scheduler_starting(void)
+{
+ WARN_ON(num_online_cpus() != 1);
+ WARN_ON(nr_context_switches() > 0);
+ rcu_scheduler_active = 1;
+}
{
struct rcu_synchronize rcu;
+ if (num_online_cpus() == 1)
+ return; /* blocking is gp if only one CPU! */
+
init_completion(&rcu.completion);
/* Will wake me after RCU finished. */
call_rcu_sched(&rcu.head, wakeme_after_rcu);
void rcu_check_callbacks(int cpu, int user)
{
if (user ||
- (idle_cpu(cpu) && !in_softirq() &&
- hardirq_count() <= (1 << HARDIRQ_SHIFT))) {
+ (idle_cpu(cpu) && rcu_scheduler_active &&
+ !in_softirq() && hardirq_count() <= (1 << HARDIRQ_SHIFT))) {
/*
* Get here if this CPU took its interrupt from user
{
ktime_t now;
- if (rt_bandwidth_enabled() && rt_b->rt_runtime == RUNTIME_INF)
+ if (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF)
return;
if (hrtimer_active(&rt_b->rt_period_timer))
return ret;
}
+
+int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk)
+{
+ /* Don't accept realtime tasks when there is no way for them to run */
+ if (rt_task(tsk) && tg->rt_bandwidth.rt_runtime == 0)
+ return 0;
+
+ return 1;
+}
+
#else /* !CONFIG_RT_GROUP_SCHED */
static int sched_rt_global_constraints(void)
{
struct task_struct *tsk)
{
#ifdef CONFIG_RT_GROUP_SCHED
- /* Don't accept realtime tasks when there is no way for them to run */
- if (rt_task(tsk) && cgroup_tg(cgrp)->rt_bandwidth.rt_runtime == 0)
+ if (!sched_rt_can_attach(cgroup_tg(cgrp), tsk))
return -EINVAL;
#else
/* We don't support RT-tasks being in separate groups */
static u64 cpuacct_cpuusage_read(struct cpuacct *ca, int cpu)
{
- u64 *cpuusage = percpu_ptr(ca->cpuusage, cpu);
+ u64 *cpuusage = per_cpu_ptr(ca->cpuusage, cpu);
u64 data;
#ifndef CONFIG_64BIT
static void cpuacct_cpuusage_write(struct cpuacct *ca, int cpu, u64 val)
{
- u64 *cpuusage = percpu_ptr(ca->cpuusage, cpu);
+ u64 *cpuusage = per_cpu_ptr(ca->cpuusage, cpu);
#ifndef CONFIG_64BIT
/*
ca = task_ca(tsk);
for (; ca; ca = ca->parent) {
- u64 *cpuusage = percpu_ptr(ca->cpuusage, cpu);
+ u64 *cpuusage = per_cpu_ptr(ca->cpuusage, cpu);
*cpuusage += cputime;
}
}
#include <linux/seccomp.h>
#include <linux/sched.h>
+#include <linux/compat.h>
/* #define SECCOMP_DEBUG 1 */
#define NR_SECCOMP_MODES 1
0, /* null terminated */
};
-#ifdef TIF_32BIT
+#ifdef CONFIG_COMPAT
static int mode1_syscalls_32[] = {
__NR_seccomp_read_32, __NR_seccomp_write_32, __NR_seccomp_exit_32, __NR_seccomp_sigreturn_32,
0, /* null terminated */
switch (mode) {
case 1:
syscall = mode1_syscalls;
-#ifdef TIF_32BIT
- if (test_thread_flag(TIF_32BIT))
+#ifdef CONFIG_COMPAT
+ if (is_compat_task())
syscall = mode1_syscalls_32;
#endif
do {
preempt_enable_no_resched();
cond_resched();
preempt_disable();
+ rcu_qsctr_inc((long)__bind_cpu);
}
preempt_enable();
set_current_state(TASK_INTERRUPTIBLE);
* doesn't hit this CPU until we're ready. */
get_cpu();
for_each_online_cpu(i) {
- sm_work = percpu_ptr(stop_machine_work, i);
+ sm_work = per_cpu_ptr(stop_machine_work, i);
INIT_WORK(sm_work, stop_cpu);
queue_work_on(i, stop_machine_wq, sm_work);
}
abort_creds(new);
return retval;
}
-
+
/*
* change the user struct in a credentials set to match the new UID
*/
if (!new_user)
return -EAGAIN;
+ if (!task_can_switch_user(new_user, current)) {
+ free_uid(new_user);
+ return -EINVAL;
+ }
+
if (atomic_read(&new_user->processes) >=
current->signal->rlim[RLIMIT_NPROC].rlim_cur &&
new_user != INIT_USER) {
goto error;
}
- retval = -EAGAIN;
- if (new->uid != old->uid && set_user(new) < 0)
- goto error;
-
+ if (new->uid != old->uid) {
+ retval = set_user(new);
+ if (retval < 0)
+ goto error;
+ }
if (ruid != (uid_t) -1 ||
(euid != (uid_t) -1 && euid != old->uid))
new->suid = new->euid;
retval = -EPERM;
if (capable(CAP_SETUID)) {
new->suid = new->uid = uid;
- if (uid != old->uid && set_user(new) < 0) {
- retval = -EAGAIN;
- goto error;
+ if (uid != old->uid) {
+ retval = set_user(new);
+ if (retval < 0)
+ goto error;
}
} else if (uid != old->uid && uid != new->suid) {
goto error;
goto error;
}
- retval = -EAGAIN;
if (ruid != (uid_t) -1) {
new->uid = ruid;
- if (ruid != old->uid && set_user(new) < 0)
- goto error;
+ if (ruid != old->uid) {
+ retval = set_user(new);
+ if (retval < 0)
+ goto error;
+ }
}
if (euid != (uid_t) -1)
new->euid = euid;
if (likely(tsk->mm)) {
cputime_t time, dtime;
struct timeval value;
+ unsigned long flags;
u64 delta;
+ local_irq_save(flags);
time = tsk->stime + tsk->utime;
dtime = cputime_sub(time, tsk->acct_timexpd);
jiffies_to_timeval(cputime_to_jiffies(dtime), &value);
delta = delta * USEC_PER_SEC + value.tv_usec;
if (delta == 0)
- return;
+ goto out;
tsk->acct_timexpd = time;
tsk->acct_rss_mem1 += delta * get_mm_rss(tsk->mm);
tsk->acct_vm_mem1 += delta * tsk->mm->total_vm;
+ out:
+ local_irq_restore(flags);
}
}
/* work function to remove sysfs directory for a user and free up
* corresponding structures.
*/
-static void remove_user_sysfs_dir(struct work_struct *w)
+static void cleanup_user_struct(struct work_struct *w)
{
struct user_struct *up = container_of(w, struct user_struct, work);
unsigned long flags;
int remove_user = 0;
- if (up->user_ns != &init_user_ns)
- return;
/* Make uid_hash_remove() + sysfs_remove_file() + kobject_del()
* atomic.
*/
if (!remove_user)
goto done;
- kobject_uevent(&up->kobj, KOBJ_REMOVE);
- kobject_del(&up->kobj);
- kobject_put(&up->kobj);
+ if (up->user_ns == &init_user_ns) {
+ kobject_uevent(&up->kobj, KOBJ_REMOVE);
+ kobject_del(&up->kobj);
+ kobject_put(&up->kobj);
+ }
sched_destroy_user(up);
key_put(up->uid_keyring);
atomic_inc(&up->__count);
spin_unlock_irqrestore(&uidhash_lock, flags);
- INIT_WORK(&up->work, remove_user_sysfs_dir);
+ INIT_WORK(&up->work, cleanup_user_struct);
schedule_work(&up->work);
}
#endif
+#if defined(CONFIG_RT_GROUP_SCHED) && defined(CONFIG_USER_SCHED)
+/*
+ * We need to check if a setuid can take place. This function should be called
+ * before successfully completing the setuid.
+ */
+int task_can_switch_user(struct user_struct *up, struct task_struct *tsk)
+{
+ return sched_rt_can_attach(up->tg, tsk);
+}
+#else
+int task_can_switch_user(struct user_struct *up, struct task_struct *tsk)
+{
+ return 1;
+}
+#endif
+
/*
* Locate the user_struct for the passed UID. If found, take a ref on it. The
* caller must undo that ref with free_uid().
config LZO_DECOMPRESS
tristate
+#
+# These all provide a common interface (hence the apparent duplication with
+# ZLIB_INFLATE; DECOMPRESS_GZIP is just a wrapper.)
+#
+config DECOMPRESS_GZIP
+ select ZLIB_INFLATE
+ tristate
+
+config DECOMPRESS_BZIP2
+ tristate
+
+config DECOMPRESS_LZMA
+ tristate
+
#
# Generic allocator support is selected if needed
#
rbtree.o radix-tree.o dump_stack.o \
idr.o int_sqrt.o extable.o prio_tree.o \
sha1.o irq_regs.o reciprocal_div.o argv_split.o \
- proportions.o prio_heap.o ratelimit.o show_mem.o is_single_threaded.o
+ proportions.o prio_heap.o ratelimit.o show_mem.o \
+ is_single_threaded.o decompress.o
lib-$(CONFIG_MMU) += ioremap.o
lib-$(CONFIG_SMP) += cpumask.o
obj-$(CONFIG_LZO_COMPRESS) += lzo/
obj-$(CONFIG_LZO_DECOMPRESS) += lzo/
+lib-$(CONFIG_DECOMPRESS_GZIP) += decompress_inflate.o
+lib-$(CONFIG_DECOMPRESS_BZIP2) += decompress_bunzip2.o
+lib-$(CONFIG_DECOMPRESS_LZMA) += decompress_unlzma.o
+
obj-$(CONFIG_TEXTSEARCH) += textsearch.o
obj-$(CONFIG_TEXTSEARCH_KMP) += ts_kmp.o
obj-$(CONFIG_TEXTSEARCH_BM) += ts_bm.o
--- /dev/null
+/*
+ * decompress.c
+ *
+ * Detect the decompression method based on magic number
+ */
+
+#include <linux/decompress/generic.h>
+
+#include <linux/decompress/bunzip2.h>
+#include <linux/decompress/unlzma.h>
+#include <linux/decompress/inflate.h>
+
+#include <linux/types.h>
+#include <linux/string.h>
+
+#ifndef CONFIG_DECOMPRESS_GZIP
+# define gunzip NULL
+#endif
+#ifndef CONFIG_DECOMPRESS_BZIP2
+# define bunzip2 NULL
+#endif
+#ifndef CONFIG_DECOMPRESS_LZMA
+# define unlzma NULL
+#endif
+
+static const struct compress_format {
+ unsigned char magic[2];
+ const char *name;
+ decompress_fn decompressor;
+} compressed_formats[] = {
+ { {037, 0213}, "gzip", gunzip },
+ { {037, 0236}, "gzip", gunzip },
+ { {0x42, 0x5a}, "bzip2", bunzip2 },
+ { {0x5d, 0x00}, "lzma", unlzma },
+ { {0, 0}, NULL, NULL }
+};
+
+decompress_fn decompress_method(const unsigned char *inbuf, int len,
+ const char **name)
+{
+ const struct compress_format *cf;
+
+ if (len < 2)
+ return NULL; /* Need at least this much... */
+
+ for (cf = compressed_formats; cf->name; cf++) {
+ if (!memcmp(inbuf, cf->magic, 2))
+ break;
+ }
+ if (name)
+ *name = cf->name;
+ return cf->decompressor;
+}
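Callers probe first, then dispatch; a condensed sketch of the pattern
used by identify_ramdisk_image() and unpack_to_rootfs() above:

	decompress_fn deco;
	const char *name;

	deco = decompress_method(buf, len, &name);
	if (deco)
		deco(buf, len, NULL, flush, NULL, &pos, error);
	else if (name)
		error("decompressor not configured");	/* known format */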
--- /dev/null
+/* vi: set sw = 4 ts = 4: */
+/* Small bzip2 deflate implementation, by Rob Landley (rob@landley.net).
+
+ Based on bzip2 decompression code by Julian R Seward (jseward@acm.org),
+ which also acknowledges contributions by Mike Burrows, David Wheeler,
+ Peter Fenwick, Alistair Moffat, Radford Neal, Ian H. Witten,
+ Robert Sedgewick, and Jon L. Bentley.
+
+ This code is licensed under the LGPLv2:
+ LGPL (http://www.gnu.org/copyleft/lgpl.html)
+*/
+
+/*
+ Size and speed optimizations by Manuel Novoa III (mjn3@codepoet.org).
+
+ More efficient reading of Huffman codes, a streamlined read_bunzip()
+ function, and various other tweaks. In (limited) tests, approximately
+ 20% faster than bzcat on x86 and about 10% faster on arm.
+
+ Note that about 2/3 of the time is spent in read_bunzip() reversing
+ the Burrows-Wheeler transformation. Much of that time is delay
+ resulting from cache misses.
+
+ I would ask that anyone benefiting from this work, especially those
+ using it in commercial products, consider making a donation to my local
+ non-profit hospice organization in the name of the woman I loved, who
+ passed away Feb. 12, 2003.
+
+ In memory of Toni W. Hagan
+
+ Hospice of Acadiana, Inc.
+ 2600 Johnston St., Suite 200
+ Lafayette, LA 70503-3240
+
+ Phone (337) 232-1234 or 1-800-738-2226
+ Fax (337) 232-1297
+
+ http://www.hospiceacadiana.com/
+
+ Manuel
+ */
+
+/*
+ Made it fit for running in Linux Kernel by Alain Knaff (alain@knaff.lu)
+*/
+
+
+#ifndef STATIC
+#include <linux/decompress/bunzip2.h>
+#endif /* !STATIC */
+
+#include <linux/decompress/mm.h>
+
+#ifndef INT_MAX
+#define INT_MAX 0x7fffffff
+#endif
+
+/* Constants for Huffman coding */
+#define MAX_GROUPS 6
+#define GROUP_SIZE 50 /* 64 would have been more efficient */
+#define MAX_HUFCODE_BITS 20 /* Longest Huffman code allowed */
+#define MAX_SYMBOLS 258 /* 256 literals + RUNA + RUNB */
+#define SYMBOL_RUNA 0
+#define SYMBOL_RUNB 1
+
+/* Status return values */
+#define RETVAL_OK 0
+#define RETVAL_LAST_BLOCK (-1)
+#define RETVAL_NOT_BZIP_DATA (-2)
+#define RETVAL_UNEXPECTED_INPUT_EOF (-3)
+#define RETVAL_UNEXPECTED_OUTPUT_EOF (-4)
+#define RETVAL_DATA_ERROR (-5)
+#define RETVAL_OUT_OF_MEMORY (-6)
+#define RETVAL_OBSOLETE_INPUT (-7)
+
+/* Other housekeeping constants */
+#define BZIP2_IOBUF_SIZE 4096
+
+/* This is what we know about each Huffman coding group */
+struct group_data {
+ /* We have an extra slot at the end of limit[] for a sentinel value. */
+ int limit[MAX_HUFCODE_BITS+1];
+ int base[MAX_HUFCODE_BITS];
+ int permute[MAX_SYMBOLS];
+ int minLen, maxLen;
+};
+
+/* Structure holding all the housekeeping data, including IO buffers and
+ memory that persists between calls to bunzip */
+struct bunzip_data {
+ /* State for interrupting output loop */
+ int writeCopies, writePos, writeRunCountdown, writeCount, writeCurrent;
+ /* I/O tracking data (file handles, buffers, positions, etc.) */
+ int (*fill)(void*, unsigned int);
+ int inbufCount, inbufPos /*, outbufPos*/;
+ unsigned char *inbuf /*,*outbuf*/;
+ unsigned int inbufBitCount, inbufBits;
+ /* The CRC values stored in the block header and calculated from the
+ data */
+ unsigned int crc32Table[256], headerCRC, totalCRC, writeCRC;
+ /* Intermediate buffer and its size (in bytes) */
+ unsigned int *dbuf, dbufSize;
+ /* These things are a bit too big to go on the stack */
+ unsigned char selectors[32768]; /* nSelectors = 15 bits */
+ struct group_data groups[MAX_GROUPS]; /* Huffman coding tables */
+ int io_error; /* non-zero if we have IO error */
+};
+
+
+/* Return the next nnn bits of input. All reads from the compressed input
+ are done through this function. All reads are big endian */
+static unsigned int INIT get_bits(struct bunzip_data *bd, char bits_wanted)
+{
+ unsigned int bits = 0;
+
+ /* If we need to get more data from the byte buffer, do so.
+ (Loop getting one byte at a time to enforce endianness and avoid
+ unaligned access.) */
+ while (bd->inbufBitCount < bits_wanted) {
+ /* If we need to read more data from file into byte buffer, do
+ so */
+ if (bd->inbufPos == bd->inbufCount) {
+ if (bd->io_error)
+ return 0;
+ bd->inbufCount = bd->fill(bd->inbuf, BZIP2_IOBUF_SIZE);
+ if (bd->inbufCount <= 0) {
+ bd->io_error = RETVAL_UNEXPECTED_INPUT_EOF;
+ return 0;
+ }
+ bd->inbufPos = 0;
+ }
+ /* Avoid 32-bit overflow (dump bit buffer to top of output) */
+ if (bd->inbufBitCount >= 24) {
+ bits = bd->inbufBits&((1 << bd->inbufBitCount)-1);
+ bits_wanted -= bd->inbufBitCount;
+ bits <<= bits_wanted;
+ bd->inbufBitCount = 0;
+ }
+ /* Grab next 8 bits of input from buffer. */
+ bd->inbufBits = (bd->inbufBits << 8)|bd->inbuf[bd->inbufPos++];
+ bd->inbufBitCount += 8;
+ }
+ /* Calculate result */
+ bd->inbufBitCount -= bits_wanted;
+ bits |= (bd->inbufBits >> bd->inbufBitCount)&((1 << bits_wanted)-1);
+
+ return bits;
+}
+
+/* Unpacks the next block and sets up for the inverse burrows-wheeler step. */
+
+static int INIT get_next_block(struct bunzip_data *bd)
+{
+ struct group_data *hufGroup = NULL;
+ int *base = NULL;
+ int *limit = NULL;
+ int dbufCount, nextSym, dbufSize, groupCount, selector,
+ i, j, k, t, runPos, symCount, symTotal, nSelectors,
+ byteCount[256];
+ unsigned char uc, symToByte[256], mtfSymbol[256], *selectors;
+ unsigned int *dbuf, origPtr;
+
+ dbuf = bd->dbuf;
+ dbufSize = bd->dbufSize;
+ selectors = bd->selectors;
+
+ /* Read in header signature and CRC, then validate signature.
+ (last block signature means CRC is for whole file, return now) */
+ i = get_bits(bd, 24);
+ j = get_bits(bd, 24);
+ bd->headerCRC = get_bits(bd, 32);
+ if ((i == 0x177245) && (j == 0x385090))
+ return RETVAL_LAST_BLOCK;
+ if ((i != 0x314159) || (j != 0x265359))
+ return RETVAL_NOT_BZIP_DATA;
+ /* We can add support for blockRandomised if anybody complains.
+ There was some code for this in busybox 1.0.0-pre3, but nobody ever
+ noticed that it didn't actually work. */
+ if (get_bits(bd, 1))
+ return RETVAL_OBSOLETE_INPUT;
+ origPtr = get_bits(bd, 24);
+ if (origPtr > dbufSize)
+ return RETVAL_DATA_ERROR;
+ /* mapping table: if some byte values are never used (encoding things
+ like ascii text), the compression code removes the gaps to have fewer
+ symbols to deal with, and writes a sparse bitfield indicating which
+ values were present. We make a translation table to convert the
+ symbols back to the corresponding bytes. */
+ t = get_bits(bd, 16);
+ symTotal = 0;
+ for (i = 0; i < 16; i++) {
+ if (t&(1 << (15-i))) {
+ k = get_bits(bd, 16);
+ for (j = 0; j < 16; j++)
+ if (k&(1 << (15-j)))
+ symToByte[symTotal++] = (16*i)+j;
+ }
+ }
+ /* How many different Huffman coding groups does this block use? */
+ groupCount = get_bits(bd, 3);
+ if (groupCount < 2 || groupCount > MAX_GROUPS)
+ return RETVAL_DATA_ERROR;
+ /* nSelectors: every GROUP_SIZE symbols we select a new
+ Huffman coding group. Read in the group selector list,
+ which is stored as MTF encoded bit runs. (MTF = Move To
+ Front, as each value is used it's moved to the start of the
+ list.) */
+ nSelectors = get_bits(bd, 15);
+ if (!nSelectors)
+ return RETVAL_DATA_ERROR;
+ for (i = 0; i < groupCount; i++)
+ mtfSymbol[i] = i;
+ for (i = 0; i < nSelectors; i++) {
+ /* Get next value */
+ for (j = 0; get_bits(bd, 1); j++)
+ if (j >= groupCount)
+ return RETVAL_DATA_ERROR;
+ /* Decode MTF to get the next selector */
+ uc = mtfSymbol[j];
+ for (; j; j--)
+ mtfSymbol[j] = mtfSymbol[j-1];
+ mtfSymbol[0] = selectors[i] = uc;
+ }
+ /* Read the Huffman coding tables for each group, which code
+ for symTotal literal symbols, plus two run symbols (RUNA,
+ RUNB) */
+ symCount = symTotal+2;
+ for (j = 0; j < groupCount; j++) {
+ unsigned char length[MAX_SYMBOLS], temp[MAX_HUFCODE_BITS+1];
+ int minLen, maxLen, pp;
+ /* Read Huffman code lengths for each symbol. They're
+ stored in a way similar to mtf; record a starting
+ value for the first symbol, and an offset from the
+ previous value for every symbol after that.
+ (Subtracting 1 before the loop and then adding it
+ back at the end is an optimization that makes the
+ test inside the loop simpler: symbol length 0
+ becomes negative, so an unsigned inequality catches
+ it.) */
+ t = get_bits(bd, 5)-1;
+ for (i = 0; i < symCount; i++) {
+ for (;;) {
+ if (((unsigned)t) > (MAX_HUFCODE_BITS-1))
+ return RETVAL_DATA_ERROR;
+
+ /* If first bit is 0, stop. Else
+ second bit indicates whether to
+ increment or decrement the value.
+ Optimization: grab 2 bits and unget
+ the second if the first was 0. */
+
+ k = get_bits(bd, 2);
+ if (k < 2) {
+ bd->inbufBitCount++;
+ break;
+ }
+ /* Add one if second bit 1, else
+ * subtract 1. Avoids if/else */
+ t += (((k+1)&2)-1);
+ }
+ /* Correct for the initial -1, to get the
+ * final symbol length */
+ length[i] = t+1;
+ }
+ /* Find largest and smallest lengths in this group */
+ minLen = maxLen = length[0];
+
+ for (i = 1; i < symCount; i++) {
+ if (length[i] > maxLen)
+ maxLen = length[i];
+ else if (length[i] < minLen)
+ minLen = length[i];
+ }
+
+ /* Calculate permute[], base[], and limit[] tables from
+ * length[].
+ *
+ * permute[] is the lookup table for converting
+ * Huffman coded symbols into decoded symbols. base[]
+ * is the amount to subtract from the value of a
+ * Huffman symbol of a given length when using
+ * permute[].
+ *
+ * limit[] indicates the largest numerical value a
+ * symbol with a given number of bits can have. This
+ * is how the Huffman codes can vary in length: each
+ * code with a value > limit[length] needs another
+ * bit.
+ */
+ hufGroup = bd->groups+j;
+ hufGroup->minLen = minLen;
+ hufGroup->maxLen = maxLen;
+ /* Note that minLen can't be smaller than 1, so we
+ adjust the base and limit array pointers so we're
+ not always wasting the first entry. We do this
+ again when using them (during symbol decoding).*/
+ base = hufGroup->base-1;
+ limit = hufGroup->limit-1;
+ /* Calculate permute[]. Concurrently, initialize
+ * temp[] and limit[]. */
+ pp = 0;
+ for (i = minLen; i <= maxLen; i++) {
+ temp[i] = limit[i] = 0;
+ for (t = 0; t < symCount; t++)
+ if (length[t] == i)
+ hufGroup->permute[pp++] = t;
+ }
+ /* Count symbols coded for at each bit length */
+ for (i = 0; i < symCount; i++)
+ temp[length[i]]++;
+ /* Calculate limit[] (the largest symbol-coding value
+ *at each bit length, which is (previous limit <<
+ *1)+symbols at this level), and base[] (number of
+ *symbols to ignore at each bit length, which is limit
+ *minus the cumulative count of symbols coded for
+ *already). */
+ pp = t = 0;
+ for (i = minLen; i < maxLen; i++) {
+ pp += temp[i];
+ /* We read the largest possible symbol size
+ and then unget bits after determining how
+ many we need, and those extra bits could be
+ set to anything. (They're noise from
+ future symbols.) At each level we're
+ really only interested in the first few
+ bits, so here we set all the trailing
+ to-be-ignored bits to 1 so they don't
+ affect the value > limit[length]
+ comparison. */
+ limit[i] = (pp << (maxLen - i)) - 1;
+ pp <<= 1;
+ base[i+1] = pp-(t += temp[i]);
+ }
+ limit[maxLen+1] = INT_MAX; /* Sentinel value for
+ * reading next sym. */
+ limit[maxLen] = pp+temp[maxLen]-1;
+ base[minLen] = 0;
+ }
+ /* We've finished reading and digesting the block header. Now
+ read this block's Huffman coded symbols from the file and
+ undo the Huffman coding and run length encoding, saving the
+ result into dbuf[dbufCount++] = uc */
+
+ /* Initialize symbol occurrence counters and symbol Move To
+ * Front table */
+ for (i = 0; i < 256; i++) {
+ byteCount[i] = 0;
+ mtfSymbol[i] = (unsigned char)i;
+ }
+ /* Loop through compressed symbols. */
+ runPos = dbufCount = symCount = selector = 0;
+ for (;;) {
+ /* Determine which Huffman coding group to use. */
+ if (!(symCount--)) {
+ symCount = GROUP_SIZE-1;
+ if (selector >= nSelectors)
+ return RETVAL_DATA_ERROR;
+ hufGroup = bd->groups+selectors[selector++];
+ base = hufGroup->base-1;
+ limit = hufGroup->limit-1;
+ }
+ /* Read next Huffman-coded symbol. */
+ /* Note: It is far cheaper to read maxLen bits and
+ back up than it is to read minLen bits and then an
+ additional bit at a time, testing as we go.
+ Because there is a trailing last block (with file
+ CRC), there is no danger of the overread causing an
+ unexpected EOF for a valid compressed file. As a
+ further optimization, we do the read inline
+ (falling back to a call to get_bits if the buffer
+ runs dry). The following (up to got_huff_bits:) is
+ equivalent to j = get_bits(bd, hufGroup->maxLen);
+ */
+ while (bd->inbufBitCount < hufGroup->maxLen) {
+ if (bd->inbufPos == bd->inbufCount) {
+ j = get_bits(bd, hufGroup->maxLen);
+ goto got_huff_bits;
+ }
+ bd->inbufBits =
+ (bd->inbufBits << 8)|bd->inbuf[bd->inbufPos++];
+ bd->inbufBitCount += 8;
+ };
+ bd->inbufBitCount -= hufGroup->maxLen;
+ j = (bd->inbufBits >> bd->inbufBitCount)&
+ ((1 << hufGroup->maxLen)-1);
+got_huff_bits:
+ /* Figure out how many bits are in next symbol and
+ * unget extras */
+ i = hufGroup->minLen;
+ while (j > limit[i])
+ ++i;
+ bd->inbufBitCount += (hufGroup->maxLen - i);
+ /* Huffman decode value to get nextSym (with bounds checking) */
+ if ((i > hufGroup->maxLen)
+ || (((unsigned)(j = (j>>(hufGroup->maxLen-i))-base[i]))
+ >= MAX_SYMBOLS))
+ return RETVAL_DATA_ERROR;
+ nextSym = hufGroup->permute[j];
+ /* We have now decoded the symbol, which indicates
+ either a new literal byte, or a repeated run of the
+ most recent literal byte. First, check if nextSym
+ indicates a repeated run, and if so loop collecting
+ how many times to repeat the last literal. */
+ if (((unsigned)nextSym) <= SYMBOL_RUNB) { /* RUNA or RUNB */
+ /* If this is the start of a new run, zero out
+ * counter */
+ if (!runPos) {
+ runPos = 1;
+ t = 0;
+ }
+ /* Neat trick that saves 1 symbol: instead of
+ or-ing 0 or 1 at each bit position, add 1
+ or 2 instead. For example, 1011 is 1 << 0
+ + 1 << 1 + 2 << 2. 1010 is 2 << 0 + 2 << 1
+ + 1 << 2. You can make any bit pattern
+ that way using 1 less symbol than the basic
+ or 0/1 method (except all bits 0, which
+ would use no symbols, but a run of length 0
+ doesn't mean anything in this context).
+ Thus space is saved. */
+ t += (runPos << nextSym);
+ /* +runPos if RUNA; +2*runPos if RUNB */
+
+ runPos <<= 1;
+ continue;
+ }
+ /* When we hit the first non-run symbol after a run,
+ we now know how many times to repeat the last
+ literal, so append that many copies to our buffer
+ of decoded symbols (dbuf) now. (The last literal
+ used is the one at the head of the mtfSymbol
+ array.) */
+ if (runPos) {
+ runPos = 0;
+ if (dbufCount+t >= dbufSize)
+ return RETVAL_DATA_ERROR;
+
+ uc = symToByte[mtfSymbol[0]];
+ byteCount[uc] += t;
+ while (t--)
+ dbuf[dbufCount++] = uc;
+ }
+ /* Is this the terminating symbol? */
+ if (nextSym > symTotal)
+ break;
+ /* At this point, nextSym indicates a new literal
+ character. Subtract one to get the position in the
+ MTF array at which this literal is currently to be
+ found. (Note that the result can't be -1 or 0,
+ because 0 and 1 are RUNA and RUNB. But another
+ instance of the first symbol in the mtf array,
+ position 0, would have been handled as part of a
+ run above. Therefore 1 unused mtf position minus 2
+ non-literal nextSym values equals -1.) */
+ if (dbufCount >= dbufSize)
+ return RETVAL_DATA_ERROR;
+ i = nextSym - 1;
+ uc = mtfSymbol[i];
+ /* Adjust the MTF array. Since we typically expect to
+ *move only a small number of symbols, and are bound
+ *by 256 in any case, using memmove here would
+ *typically be bigger and slower due to function call
+ *overhead and other assorted setup costs. */
+ do {
+ mtfSymbol[i] = mtfSymbol[i-1];
+ } while (--i);
+ mtfSymbol[0] = uc;
+ uc = symToByte[uc];
+ /* We have our literal byte. Save it into dbuf. */
+ byteCount[uc]++;
+ dbuf[dbufCount++] = (unsigned int)uc;
+ }
+ /* At this point, we've read all the Huffman-coded symbols
+ (and repeated runs) for this block from the input stream,
+ and decoded them into the intermediate buffer. There are
+ dbufCount many decoded bytes in dbuf[]. Now undo the
+ Burrows-Wheeler transform on dbuf. See
+ http://dogma.net/markn/articles/bwt/bwt.htm
+ */
+ /* Turn byteCount into cumulative occurrence counts of 0 to n-1. */
+ j = 0;
+ for (i = 0; i < 256; i++) {
+ k = j+byteCount[i];
+ byteCount[i] = j;
+ j = k;
+ }
+ /* Figure out what order dbuf would be in if we sorted it. */
+ for (i = 0; i < dbufCount; i++) {
+ uc = (unsigned char)(dbuf[i] & 0xff);
+ dbuf[byteCount[uc]] |= (i << 8);
+ byteCount[uc]++;
+ }
+ /* Decode first byte by hand to initialize "previous" byte.
+ Note that it doesn't get output, and if the first three
+ characters are identical it doesn't qualify as a run (hence
+ writeRunCountdown = 5). */
+ if (dbufCount) {
+ if (origPtr >= dbufCount)
+ return RETVAL_DATA_ERROR;
+ bd->writePos = dbuf[origPtr];
+ bd->writeCurrent = (unsigned char)(bd->writePos&0xff);
+ bd->writePos >>= 8;
+ bd->writeRunCountdown = 5;
+ }
+ bd->writeCount = dbufCount;
+
+ return RETVAL_OK;
+}
+
+/* Undo the Burrows-Wheeler transform on the intermediate buffer to produce
+   output. If start_bunzip was initialized with out_fd == -1, then up to len
+   bytes of data are written to outbuf. Return value is the number of bytes
+   written, or an error (all errors are negative numbers). If out_fd != -1,
+   outbuf and len are ignored, data is written to out_fd and the return is
+   RETVAL_OK or an error.
+*/
+
+static int INIT read_bunzip(struct bunzip_data *bd, char *outbuf, int len)
+{
+ const unsigned int *dbuf;
+ int pos, xcurrent, previous, gotcount;
+
+ /* If last read was short due to end of file, return last block now */
+ if (bd->writeCount < 0)
+ return bd->writeCount;
+
+ gotcount = 0;
+ dbuf = bd->dbuf;
+ pos = bd->writePos;
+ xcurrent = bd->writeCurrent;
+
+ /* We will always have pending decoded data to write into the output
+ buffer unless this is the very first call (in which case we haven't
+ Huffman-decoded a block into the intermediate buffer yet). */
+
+ if (bd->writeCopies) {
+ /* Inside the loop, writeCopies means extra copies (beyond 1) */
+ --bd->writeCopies;
+ /* Loop outputting bytes */
+ for (;;) {
+ /* If the output buffer is full, snapshot
+ * state and return */
+ if (gotcount >= len) {
+ bd->writePos = pos;
+ bd->writeCurrent = xcurrent;
+ bd->writeCopies++;
+ return len;
+ }
+ /* Write next byte into output buffer, updating CRC */
+ outbuf[gotcount++] = xcurrent;
+ bd->writeCRC = (((bd->writeCRC) << 8)
+ ^bd->crc32Table[((bd->writeCRC) >> 24)
+ ^xcurrent]);
+ /* Loop now if we're outputting multiple
+ * copies of this byte */
+ if (bd->writeCopies) {
+ --bd->writeCopies;
+ continue;
+ }
+decode_next_byte:
+ if (!bd->writeCount--)
+ break;
+ /* Follow sequence vector to undo
+ * Burrows-Wheeler transform */
+ previous = xcurrent;
+ pos = dbuf[pos];
+ xcurrent = pos&0xff;
+ pos >>= 8;
+ /* After 3 consecutive copies of the same
+ byte, the 4th is a repeat count. We count
+ down from 4 instead of counting up because
+ testing for non-zero is faster */
+ if (--bd->writeRunCountdown) {
+ if (xcurrent != previous)
+ bd->writeRunCountdown = 4;
+ } else {
+ /* We have a repeated run, this byte
+ * indicates the count */
+ bd->writeCopies = xcurrent;
+ xcurrent = previous;
+ bd->writeRunCountdown = 5;
+ /* Sometimes there are just 3 bytes
+ * (run length 0) */
+ if (!bd->writeCopies)
+ goto decode_next_byte;
+ /* Subtract the 1 copy we'd output
+ * anyway to get extras */
+ --bd->writeCopies;
+ }
+ }
+ /* Decompression of this block completed successfully */
+ bd->writeCRC = ~bd->writeCRC;
+ bd->totalCRC = ((bd->totalCRC << 1) |
+ (bd->totalCRC >> 31)) ^ bd->writeCRC;
+ /* If this block had a CRC error, force file level CRC error. */
+ if (bd->writeCRC != bd->headerCRC) {
+ bd->totalCRC = bd->headerCRC+1;
+ return RETVAL_LAST_BLOCK;
+ }
+ }
+
+ /* Refill the intermediate buffer by Huffman-decoding next
+ * block of input */
+ /* (previous is just a convenient unused temp variable here) */
+ previous = get_next_block(bd);
+ if (previous) {
+ bd->writeCount = previous;
+ return (previous != RETVAL_LAST_BLOCK) ? previous : gotcount;
+ }
+ bd->writeCRC = 0xffffffffUL;
+ pos = bd->writePos;
+ xcurrent = bd->writeCurrent;
+ goto decode_next_byte;
+}
+
+static int INIT nofill(void *buf, unsigned int len)
+{
+ return -1;
+}
+
+/* Allocate the structure, read the file header. If in_fd == -1, inbuf must
+   contain a complete bzip2 file (len bytes long). If in_fd != -1, inbuf and
+   len are ignored, and data is read from the file handle into a temporary
+   buffer. */
+static int INIT start_bunzip(struct bunzip_data **bdp, void *inbuf, int len,
+ int (*fill)(void*, unsigned int))
+{
+ struct bunzip_data *bd;
+ unsigned int i, j, c;
+ const unsigned int BZh0 =
+ (((unsigned int)'B') << 24)+(((unsigned int)'Z') << 16)
+ +(((unsigned int)'h') << 8)+(unsigned int)'0';
+
+ /* Figure out how much data to allocate */
+ i = sizeof(struct bunzip_data);
+
+ /* Allocate bunzip_data. Most fields initialize to zero. */
+ bd = *bdp = malloc(i);
+ memset(bd, 0, sizeof(struct bunzip_data));
+ /* Setup input buffer */
+ bd->inbuf = inbuf;
+ bd->inbufCount = len;
+ if (fill != NULL)
+ bd->fill = fill;
+ else
+ bd->fill = nofill;
+
+ /* Init the CRC32 table (big endian) */
+ for (i = 0; i < 256; i++) {
+ c = i << 24;
+ for (j = 8; j; j--)
+ c = c&0x80000000 ? (c << 1)^0x04c11db7 : (c << 1);
+ bd->crc32Table[i] = c;
+ }
+
+ /* Ensure that file starts with "BZh['1'-'9']." */
+ i = get_bits(bd, 32);
+ if (((unsigned int)(i-BZh0-1)) >= 9)
+ return RETVAL_NOT_BZIP_DATA;
+
+ /* Fourth byte (ascii '1'-'9') indicates block size in units of 100k of
+ uncompressed data. Allocate intermediate buffer for block. */
+ bd->dbufSize = 100000*(i-BZh0);
+
+ bd->dbuf = large_malloc(bd->dbufSize * sizeof(int));
+ return RETVAL_OK;
+}
+
+/* Example usage: decompress src_fd to dst_fd. (Stops at end of bzip2 data,
+ not end of file.) */
+STATIC int INIT bunzip2(unsigned char *buf, int len,
+ int(*fill)(void*, unsigned int),
+ int(*flush)(void*, unsigned int),
+ unsigned char *outbuf,
+ int *pos,
+ void(*error_fn)(char *x))
+{
+ struct bunzip_data *bd;
+ int i = -1;
+ unsigned char *inbuf;
+
+ set_error_fn(error_fn);
+ if (flush)
+ outbuf = malloc(BZIP2_IOBUF_SIZE);
+ else
+ len -= 4; /* Uncompressed size hack active in pre-boot
+ environment */
+ if (!outbuf) {
+ error("Could not allocate output bufer");
+ return -1;
+ }
+ if (buf)
+ inbuf = buf;
+ else
+ inbuf = malloc(BZIP2_IOBUF_SIZE);
+ if (!inbuf) {
+ error("Could not allocate input bufer");
+ goto exit_0;
+ }
+ i = start_bunzip(&bd, inbuf, len, fill);
+ if (!i) {
+ for (;;) {
+ i = read_bunzip(bd, outbuf, BZIP2_IOBUF_SIZE);
+ if (i <= 0)
+ break;
+ if (!flush)
+ outbuf += i;
+ else
+ if (i != flush(outbuf, i)) {
+ i = RETVAL_UNEXPECTED_OUTPUT_EOF;
+ break;
+ }
+ }
+ }
+ /* Check CRC and release memory */
+ if (i == RETVAL_LAST_BLOCK) {
+ if (bd->headerCRC != bd->totalCRC)
+ error("Data integrity error when decompressing.");
+ else
+ i = RETVAL_OK;
+ } else if (i == RETVAL_UNEXPECTED_OUTPUT_EOF) {
+ error("Compressed file ends unexpectedly");
+ }
+ if (bd->dbuf)
+ large_free(bd->dbuf);
+ if (pos)
+ *pos = bd->inbufPos;
+ free(bd);
+ if (!buf)
+ free(inbuf);
+exit_0:
+ if (flush)
+ free(outbuf);
+ return i;
+}
+
+#define decompress bunzip2
--- /dev/null
+#ifdef STATIC
+/* Pre-boot environment: included */
+
+/* prevent inclusion of _LINUX_KERNEL_H in pre-boot environment: lots of
+ * errors about console_printk etc... on ARM */
+#define _LINUX_KERNEL_H
+
+#include "zlib_inflate/inftrees.c"
+#include "zlib_inflate/inffast.c"
+#include "zlib_inflate/inflate.c"
+
+#else /* STATIC */
+/* initramfs et al: linked */
+
+#include <linux/zutil.h>
+
+#include "zlib_inflate/inftrees.h"
+#include "zlib_inflate/inffast.h"
+#include "zlib_inflate/inflate.h"
+
+#include "zlib_inflate/infutil.h"
+
+#endif /* STATIC */
+
+#include <linux/decompress/mm.h>
+
+#define INBUF_LEN (16*1024)
+
+/* Included from initramfs et al code */
+STATIC int INIT gunzip(unsigned char *buf, int len,
+ int(*fill)(void*, unsigned int),
+ int(*flush)(void*, unsigned int),
+ unsigned char *out_buf,
+ int *pos,
+ void(*error_fn)(char *x)) {
+ u8 *zbuf;
+ struct z_stream_s *strm;
+ int rc;
+ size_t out_len;
+
+ set_error_fn(error_fn);
+ rc = -1;
+ if (flush) {
+ out_len = 0x8000; /* 32 K */
+ out_buf = malloc(out_len);
+ } else {
+ out_len = 0x7fffffff; /* no limit */
+ }
+ if (!out_buf) {
+ error("Out of memory while allocating output buffer");
+ goto gunzip_nomem1;
+ }
+
+ if (buf)
+ zbuf = buf;
+ else {
+ zbuf = malloc(INBUF_LEN);
+ len = 0;
+ }
+ if (!zbuf) {
+ error("Out of memory while allocating input buffer");
+ goto gunzip_nomem2;
+ }
+
+ strm = malloc(sizeof(*strm));
+ if (strm == NULL) {
+ error("Out of memory while allocating z_stream");
+ goto gunzip_nomem3;
+ }
+
+ strm->workspace = malloc(flush ? zlib_inflate_workspacesize() :
+ sizeof(struct inflate_state));
+ if (strm->workspace == NULL) {
+ error("Out of memory while allocating workspace");
+ goto gunzip_nomem4;
+ }
+
+ if (len == 0)
+ len = fill(zbuf, INBUF_LEN);
+
+ /* verify the gzip header */
+ if (len < 10 ||
+ zbuf[0] != 0x1f || zbuf[1] != 0x8b || zbuf[2] != 0x08) {
+ if (pos)
+ *pos = 0;
+ error("Not a gzip file");
+ goto gunzip_5;
+ }
+
+ /* skip over gzip header (1f,8b,08... 10 bytes total +
+ * possible asciz filename)
+ */
+ strm->next_in = zbuf + 10;
+ /* skip over asciz filename */
+ if (zbuf[3] & 0x8) {
+ while (strm->next_in[0])
+ strm->next_in++;
+ strm->next_in++;
+ }
+ strm->avail_in = len - (strm->next_in - zbuf);
+
+ strm->next_out = out_buf;
+ strm->avail_out = out_len;
+
+ rc = zlib_inflateInit2(strm, -MAX_WBITS);
+
+ if (!flush) {
+ WS(strm)->inflate_state.wsize = 0;
+ WS(strm)->inflate_state.window = NULL;
+ }
+
+ while (rc == Z_OK) {
+ if (strm->avail_in == 0) {
+ /* TODO: handle case where both pos and fill are set */
+ len = fill(zbuf, INBUF_LEN);
+ if (len < 0) {
+ rc = -1;
+ error("read error");
+ break;
+ }
+ strm->next_in = zbuf;
+ strm->avail_in = len;
+ }
+ rc = zlib_inflate(strm, 0);
+
+ /* Write any data generated */
+ if (flush && strm->next_out > out_buf) {
+ int l = strm->next_out - out_buf;
+ if (l != flush(out_buf, l)) {
+ rc = -1;
+ error("write error");
+ break;
+ }
+ strm->next_out = out_buf;
+ strm->avail_out = out_len;
+ }
+
+ /* after Z_FINISH, only Z_STREAM_END is "we unpacked it all" */
+ if (rc == Z_STREAM_END) {
+ rc = 0;
+ break;
+ } else if (rc != Z_OK) {
+ error("uncompression error");
+ rc = -1;
+ }
+ }
+
+ zlib_inflateEnd(strm);
+ if (pos)
+ /* add + 8 to skip over trailer */
+ *pos = strm->next_in - zbuf+8;
+
+gunzip_5:
+ free(strm->workspace);
+gunzip_nomem4:
+ free(strm);
+gunzip_nomem3:
+ if (!buf)
+ free(zbuf);
+gunzip_nomem2:
+ if (flush)
+ free(out_buf);
+gunzip_nomem1:
+ return rc; /* returns Z_OK (0) if successful */
+}
+
+#define decompress gunzip
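Because of the STATIC/INIT conditionals above, the same file can either
be linked into the kernel proper or textually included by an
architecture's pre-boot decompressor. A hypothetical arch stub would do
roughly:

	#define STATIC static
	#include "../../../../lib/decompress_inflate.c"	/* defines gunzip() */

	/* with flush == NULL the whole image is inflated into 'output': */
	gunzip(input_data, input_len, NULL, NULL, output, NULL, error);

(<linux/decompress/mm.h> supplies malloc()/free() wrappers suitable for
both environments.)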
--- /dev/null
+/* Lzma decompressor for Linux kernel. Shamelessly snarfed
+ *from busybox 1.1.1
+ *
+ *Linux kernel adaptation
+ *Copyright (C) 2006 Alain < alain@knaff.lu >
+ *
+ *Based on small lzma deflate implementation/Small range coder
+ *implementation for lzma.
+ *Copyright (C) 2006 Aurelien Jacobs < aurel@gnuage.org >
+ *
+ *Based on LzmaDecode.c from the LZMA SDK 4.22 (http://www.7-zip.org/)
+ *Copyright (C) 1999-2005 Igor Pavlov
+ *
+ *Copyrights of the parts, see headers below.
+ *
+ *
+ *This program is free software; you can redistribute it and/or
+ *modify it under the terms of the GNU Lesser General Public
+ *License as published by the Free Software Foundation; either
+ *version 2.1 of the License, or (at your option) any later version.
+ *
+ *This program is distributed in the hope that it will be useful,
+ *but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ *Lesser General Public License for more details.
+ *
+ *You should have received a copy of the GNU Lesser General Public
+ *License along with this library; if not, write to the Free Software
+ *Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef STATIC
+#include <linux/decompress/unlzma.h>
+#endif /* STATIC */
+
+#include <linux/decompress/mm.h>
+
+#define MIN(a, b) (((a) < (b)) ? (a) : (b))
+
+static long long INIT read_int(unsigned char *ptr, int size)
+{
+ int i;
+ long long ret = 0;
+
+ for (i = 0; i < size; i++)
+ ret = (ret << 8) | ptr[size-i-1];
+ return ret;
+}
+
+#define ENDIAN_CONVERT(x) \
+ x = (typeof(x))read_int((unsigned char *)&x, sizeof(x))
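+
+/* e.g. read_int() over the four bytes 78 56 34 12 yields 0x12345678;
+ * the LZMA header fields converted with this macro are stored
+ * little-endian.
+ */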
+
+
+/* Small range coder implementation for lzma.
+ *Copyright (C) 2006 Aurelien Jacobs < aurel@gnuage.org >
+ *
+ *Based on LzmaDecode.c from the LZMA SDK 4.22 (http://www.7-zip.org/)
+ *Copyright (c) 1999-2005 Igor Pavlov
+ */
+
+#include <linux/compiler.h>
+
+#define LZMA_IOBUF_SIZE 0x10000
+
+struct rc {
+ int (*fill)(void*, unsigned int);
+ uint8_t *ptr;
+ uint8_t *buffer;
+ uint8_t *buffer_end;
+ int buffer_size;
+ uint32_t code;
+ uint32_t range;
+ uint32_t bound;
+};
+
+
+#define RC_TOP_BITS 24
+#define RC_MOVE_BITS 5
+#define RC_MODEL_TOTAL_BITS 11
+
+
+/* Called twice: once at startup and once in rc_normalize() */
+static void INIT rc_read(struct rc *rc)
+{
+ rc->buffer_size = rc->fill((char *)rc->buffer, LZMA_IOBUF_SIZE);
+ if (rc->buffer_size <= 0)
+ error("unexpected EOF");
+ rc->ptr = rc->buffer;
+ rc->buffer_end = rc->buffer + rc->buffer_size;
+}
+
+/* Called once */
+static inline void INIT rc_init(struct rc *rc,
+ int (*fill)(void*, unsigned int),
+ char *buffer, int buffer_size)
+{
+ rc->fill = fill;
+ rc->buffer = (uint8_t *)buffer;
+ rc->buffer_size = buffer_size;
+ rc->buffer_end = rc->buffer + rc->buffer_size;
+ rc->ptr = rc->buffer;
+
+ rc->code = 0;
+ rc->range = 0xFFFFFFFF;
+}
+
+static inline void INIT rc_init_code(struct rc *rc)
+{
+ int i;
+
+ for (i = 0; i < 5; i++) {
+ if (rc->ptr >= rc->buffer_end)
+ rc_read(rc);
+ rc->code = (rc->code << 8) | *rc->ptr++;
+ }
+}
+
+
+/* Called once. TODO: bb_maybe_free() */
+static inline void INIT rc_free(struct rc *rc)
+{
+ free(rc->buffer);
+}
+
+/* Called twice, but one callsite is in inline'd rc_is_bit_0_helper() */
+static void INIT rc_do_normalize(struct rc *rc)
+{
+ if (rc->ptr >= rc->buffer_end)
+ rc_read(rc);
+ rc->range <<= 8;
+ rc->code = (rc->code << 8) | *rc->ptr++;
+}
+static inline void INIT rc_normalize(struct rc *rc)
+{
+ if (rc->range < (1 << RC_TOP_BITS))
+ rc_do_normalize(rc);
+}
+
+/* Called 9 times */
+/* Why does rc_is_bit_0_helper() exist?
+ *Because we want to always expose (rc->code < rc->bound) to the optimizer
+ */
+static inline uint32_t INIT rc_is_bit_0_helper(struct rc *rc, uint16_t *p)
+{
+ rc_normalize(rc);
+ rc->bound = *p * (rc->range >> RC_MODEL_TOTAL_BITS);
+ return rc->bound;
+}
+static inline int INIT rc_is_bit_0(struct rc *rc, uint16_t *p)
+{
+ uint32_t t = rc_is_bit_0_helper(rc, p);
+ return rc->code < t;
+}
+
+/* Called ~10 times, but very small, thus inlined */
+static inline void INIT rc_update_bit_0(struct rc *rc, uint16_t *p)
+{
+ rc->range = rc->bound;
+ *p += ((1 << RC_MODEL_TOTAL_BITS) - *p) >> RC_MOVE_BITS;
+}
+static inline void rc_update_bit_1(struct rc *rc, uint16_t *p)
+{
+ rc->range -= rc->bound;
+ rc->code -= rc->bound;
+ *p -= *p >> RC_MOVE_BITS;
+}
+
+/* Called 4 times in unlzma loop */
+static int INIT rc_get_bit(struct rc *rc, uint16_t *p, int *symbol)
+{
+ if (rc_is_bit_0(rc, p)) {
+ rc_update_bit_0(rc, p);
+ *symbol *= 2;
+ return 0;
+ } else {
+ rc_update_bit_1(rc, p);
+ *symbol = *symbol * 2 + 1;
+ return 1;
+ }
+}
+
+/* Called once */
+static inline int INIT rc_direct_bit(struct rc *rc)
+{
+ rc_normalize(rc);
+ rc->range >>= 1;
+ if (rc->code >= rc->range) {
+ rc->code -= rc->range;
+ return 1;
+ }
+ return 0;
+}
+
+/* Called twice */
+static inline void INIT
+rc_bit_tree_decode(struct rc *rc, uint16_t *p, int num_levels, int *symbol)
+{
+ int i = num_levels;
+
+ *symbol = 1;
+ while (i--)
+ rc_get_bit(rc, p + *symbol, symbol);
+ *symbol -= 1 << num_levels;
+}
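+/* e.g. with num_levels == 3 and decoded bits b2, b1, b0 (MSB first),
+ * the resulting symbol is (b2 << 2) | (b1 << 1) | b0 after the final
+ * subtraction of 1 << num_levels.
+ */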
+
+
+/*
+ * Small lzma deflate implementation.
+ * Copyright (C) 2006 Aurelien Jacobs < aurel@gnuage.org >
+ *
+ * Based on LzmaDecode.c from the LZMA SDK 4.22 (http://www.7-zip.org/)
+ * Copyright (C) 1999-2005 Igor Pavlov
+ */
+
+
+struct lzma_header {
+ uint8_t pos;
+ uint32_t dict_size;
+ uint64_t dst_size;
+} __attribute__ ((packed));
+
+
+#define LZMA_BASE_SIZE 1846
+#define LZMA_LIT_SIZE 768
+
+#define LZMA_NUM_POS_BITS_MAX 4
+
+#define LZMA_LEN_NUM_LOW_BITS 3
+#define LZMA_LEN_NUM_MID_BITS 3
+#define LZMA_LEN_NUM_HIGH_BITS 8
+
+#define LZMA_LEN_CHOICE 0
+#define LZMA_LEN_CHOICE_2 (LZMA_LEN_CHOICE + 1)
+#define LZMA_LEN_LOW (LZMA_LEN_CHOICE_2 + 1)
+#define LZMA_LEN_MID (LZMA_LEN_LOW \
+ + (1 << (LZMA_NUM_POS_BITS_MAX + LZMA_LEN_NUM_LOW_BITS)))
+#define LZMA_LEN_HIGH (LZMA_LEN_MID \
+		 + (1 << (LZMA_NUM_POS_BITS_MAX + LZMA_LEN_NUM_MID_BITS)))
+#define LZMA_NUM_LEN_PROBS (LZMA_LEN_HIGH + (1 << LZMA_LEN_NUM_HIGH_BITS))
+
+#define LZMA_NUM_STATES 12
+#define LZMA_NUM_LIT_STATES 7
+
+#define LZMA_START_POS_MODEL_INDEX 4
+#define LZMA_END_POS_MODEL_INDEX 14
+#define LZMA_NUM_FULL_DISTANCES (1 << (LZMA_END_POS_MODEL_INDEX >> 1))
+
+#define LZMA_NUM_POS_SLOT_BITS 6
+#define LZMA_NUM_LEN_TO_POS_STATES 4
+
+#define LZMA_NUM_ALIGN_BITS 4
+
+#define LZMA_MATCH_MIN_LEN 2
+
+#define LZMA_IS_MATCH 0
+#define LZMA_IS_REP (LZMA_IS_MATCH + (LZMA_NUM_STATES << LZMA_NUM_POS_BITS_MAX))
+#define LZMA_IS_REP_G0 (LZMA_IS_REP + LZMA_NUM_STATES)
+#define LZMA_IS_REP_G1 (LZMA_IS_REP_G0 + LZMA_NUM_STATES)
+#define LZMA_IS_REP_G2 (LZMA_IS_REP_G1 + LZMA_NUM_STATES)
+#define LZMA_IS_REP_0_LONG (LZMA_IS_REP_G2 + LZMA_NUM_STATES)
+#define LZMA_POS_SLOT (LZMA_IS_REP_0_LONG \
+ + (LZMA_NUM_STATES << LZMA_NUM_POS_BITS_MAX))
+#define LZMA_SPEC_POS (LZMA_POS_SLOT \
+		 + (LZMA_NUM_LEN_TO_POS_STATES << LZMA_NUM_POS_SLOT_BITS))
+#define LZMA_ALIGN (LZMA_SPEC_POS \
+ + LZMA_NUM_FULL_DISTANCES - LZMA_END_POS_MODEL_INDEX)
+#define LZMA_LEN_CODER (LZMA_ALIGN + (1 << LZMA_NUM_ALIGN_BITS))
+#define LZMA_REP_LEN_CODER (LZMA_LEN_CODER + LZMA_NUM_LEN_PROBS)
+#define LZMA_LITERAL (LZMA_REP_LEN_CODER + LZMA_NUM_LEN_PROBS)
+
+
+struct writer {
+ uint8_t *buffer;
+ uint8_t previous_byte;
+ size_t buffer_pos;
+ int bufsize;
+ size_t global_pos;
+ int(*flush)(void*, unsigned int);
+ struct lzma_header *header;
+};
+
+struct cstate {
+ int state;
+ uint32_t rep0, rep1, rep2, rep3;
+};
+
+static inline size_t INIT get_pos(struct writer *wr)
+{
+ return
+ wr->global_pos + wr->buffer_pos;
+}
+
+static inline uint8_t INIT peek_old_byte(struct writer *wr,
+ uint32_t offs)
+{
+ if (!wr->flush) {
+ int32_t pos;
+ while (offs > wr->header->dict_size)
+ offs -= wr->header->dict_size;
+ pos = wr->buffer_pos - offs;
+ return wr->buffer[pos];
+ } else {
+ uint32_t pos = wr->buffer_pos - offs;
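+		/* pos may wrap below zero in unsigned arithmetic; one
+		 * addition of dict_size then wraps it back into
+		 * [0, dict_size), as offs never exceeds dict_size.
+		 */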
+ while (pos >= wr->header->dict_size)
+ pos += wr->header->dict_size;
+ return wr->buffer[pos];
+ }
+
+}
+
+static inline void INIT write_byte(struct writer *wr, uint8_t byte)
+{
+ wr->buffer[wr->buffer_pos++] = wr->previous_byte = byte;
+ if (wr->flush && wr->buffer_pos == wr->header->dict_size) {
+ wr->buffer_pos = 0;
+ wr->global_pos += wr->header->dict_size;
+ wr->flush((char *)wr->buffer, wr->header->dict_size);
+ }
+}
+
+
+static inline void INIT copy_byte(struct writer *wr, uint32_t offs)
+{
+ write_byte(wr, peek_old_byte(wr, offs));
+}
+
+static inline void INIT copy_bytes(struct writer *wr,
+ uint32_t rep0, int len)
+{
+ do {
+ copy_byte(wr, rep0);
+ len--;
+ } while (len != 0 && wr->buffer_pos < wr->header->dst_size);
+}
+
+static inline void INIT process_bit0(struct writer *wr, struct rc *rc,
+ struct cstate *cst, uint16_t *p,
+ int pos_state, uint16_t *prob,
+ int lc, uint32_t literal_pos_mask) {
+ int mi = 1;
+ rc_update_bit_0(rc, prob);
+ prob = (p + LZMA_LITERAL +
+ (LZMA_LIT_SIZE
+ * (((get_pos(wr) & literal_pos_mask) << lc)
+ + (wr->previous_byte >> (8 - lc))))
+ );
+
+ if (cst->state >= LZMA_NUM_LIT_STATES) {
+ int match_byte = peek_old_byte(wr, cst->rep0);
+ do {
+ int bit;
+ uint16_t *prob_lit;
+
+ match_byte <<= 1;
+ bit = match_byte & 0x100;
+ prob_lit = prob + 0x100 + bit + mi;
+ if (rc_get_bit(rc, prob_lit, &mi)) {
+ if (!bit)
+ break;
+ } else {
+ if (bit)
+ break;
+ }
+ } while (mi < 0x100);
+ }
+ while (mi < 0x100) {
+ uint16_t *prob_lit = prob + mi;
+ rc_get_bit(rc, prob_lit, &mi);
+ }
+ write_byte(wr, mi);
+ if (cst->state < 4)
+ cst->state = 0;
+ else if (cst->state < 10)
+ cst->state -= 3;
+ else
+ cst->state -= 6;
+}
+
+static inline void INIT process_bit1(struct writer *wr, struct rc *rc,
+ struct cstate *cst, uint16_t *p,
+ int pos_state, uint16_t *prob) {
+ int offset;
+ uint16_t *prob_len;
+ int num_bits;
+ int len;
+
+ rc_update_bit_1(rc, prob);
+ prob = p + LZMA_IS_REP + cst->state;
+ if (rc_is_bit_0(rc, prob)) {
+ rc_update_bit_0(rc, prob);
+ cst->rep3 = cst->rep2;
+ cst->rep2 = cst->rep1;
+ cst->rep1 = cst->rep0;
+ cst->state = cst->state < LZMA_NUM_LIT_STATES ? 0 : 3;
+ prob = p + LZMA_LEN_CODER;
+ } else {
+ rc_update_bit_1(rc, prob);
+ prob = p + LZMA_IS_REP_G0 + cst->state;
+ if (rc_is_bit_0(rc, prob)) {
+ rc_update_bit_0(rc, prob);
+ prob = (p + LZMA_IS_REP_0_LONG
+ + (cst->state <<
+ LZMA_NUM_POS_BITS_MAX) +
+ pos_state);
+ if (rc_is_bit_0(rc, prob)) {
+ rc_update_bit_0(rc, prob);
+
+ cst->state = cst->state < LZMA_NUM_LIT_STATES ?
+ 9 : 11;
+ copy_byte(wr, cst->rep0);
+ return;
+ } else {
+ rc_update_bit_1(rc, prob);
+ }
+ } else {
+ uint32_t distance;
+
+ rc_update_bit_1(rc, prob);
+ prob = p + LZMA_IS_REP_G1 + cst->state;
+ if (rc_is_bit_0(rc, prob)) {
+ rc_update_bit_0(rc, prob);
+ distance = cst->rep1;
+ } else {
+ rc_update_bit_1(rc, prob);
+ prob = p + LZMA_IS_REP_G2 + cst->state;
+ if (rc_is_bit_0(rc, prob)) {
+ rc_update_bit_0(rc, prob);
+ distance = cst->rep2;
+ } else {
+ rc_update_bit_1(rc, prob);
+ distance = cst->rep3;
+ cst->rep3 = cst->rep2;
+ }
+ cst->rep2 = cst->rep1;
+ }
+ cst->rep1 = cst->rep0;
+ cst->rep0 = distance;
+ }
+ cst->state = cst->state < LZMA_NUM_LIT_STATES ? 8 : 11;
+ prob = p + LZMA_REP_LEN_CODER;
+ }
+
+ prob_len = prob + LZMA_LEN_CHOICE;
+ if (rc_is_bit_0(rc, prob_len)) {
+ rc_update_bit_0(rc, prob_len);
+ prob_len = (prob + LZMA_LEN_LOW
+ + (pos_state <<
+ LZMA_LEN_NUM_LOW_BITS));
+ offset = 0;
+ num_bits = LZMA_LEN_NUM_LOW_BITS;
+ } else {
+ rc_update_bit_1(rc, prob_len);
+ prob_len = prob + LZMA_LEN_CHOICE_2;
+ if (rc_is_bit_0(rc, prob_len)) {
+ rc_update_bit_0(rc, prob_len);
+ prob_len = (prob + LZMA_LEN_MID
+ + (pos_state <<
+ LZMA_LEN_NUM_MID_BITS));
+ offset = 1 << LZMA_LEN_NUM_LOW_BITS;
+ num_bits = LZMA_LEN_NUM_MID_BITS;
+ } else {
+ rc_update_bit_1(rc, prob_len);
+ prob_len = prob + LZMA_LEN_HIGH;
+ offset = ((1 << LZMA_LEN_NUM_LOW_BITS)
+ + (1 << LZMA_LEN_NUM_MID_BITS));
+ num_bits = LZMA_LEN_NUM_HIGH_BITS;
+ }
+ }
+
+ rc_bit_tree_decode(rc, prob_len, num_bits, &len);
+ len += offset;
+
+ if (cst->state < 4) {
+ int pos_slot;
+
+ cst->state += LZMA_NUM_LIT_STATES;
+ prob =
+ p + LZMA_POS_SLOT +
+ ((len <
+ LZMA_NUM_LEN_TO_POS_STATES ? len :
+ LZMA_NUM_LEN_TO_POS_STATES - 1)
+ << LZMA_NUM_POS_SLOT_BITS);
+ rc_bit_tree_decode(rc, prob,
+ LZMA_NUM_POS_SLOT_BITS,
+ &pos_slot);
+ if (pos_slot >= LZMA_START_POS_MODEL_INDEX) {
+ int i, mi;
+ num_bits = (pos_slot >> 1) - 1;
+ cst->rep0 = 2 | (pos_slot & 1);
+ if (pos_slot < LZMA_END_POS_MODEL_INDEX) {
+ cst->rep0 <<= num_bits;
+ prob = p + LZMA_SPEC_POS +
+ cst->rep0 - pos_slot - 1;
+ } else {
+ num_bits -= LZMA_NUM_ALIGN_BITS;
+ while (num_bits--)
+ cst->rep0 = (cst->rep0 << 1) |
+ rc_direct_bit(rc);
+ prob = p + LZMA_ALIGN;
+ cst->rep0 <<= LZMA_NUM_ALIGN_BITS;
+ num_bits = LZMA_NUM_ALIGN_BITS;
+ }
+ i = 1;
+ mi = 1;
+ while (num_bits--) {
+ if (rc_get_bit(rc, prob + mi, &mi))
+ cst->rep0 |= i;
+ i <<= 1;
+ }
+ } else
+ cst->rep0 = pos_slot;
+ if (++(cst->rep0) == 0)
+ return;
+ }
+
+ len += LZMA_MATCH_MIN_LEN;
+
+ copy_bytes(wr, cst->rep0, len);
+}
+
+
+
+STATIC inline int INIT unlzma(unsigned char *buf, int in_len,
+ int(*fill)(void*, unsigned int),
+ int(*flush)(void*, unsigned int),
+ unsigned char *output,
+ int *posp,
+ void(*error_fn)(char *x)
+ )
+{
+ struct lzma_header header;
+ int lc, pb, lp;
+ uint32_t pos_state_mask;
+ uint32_t literal_pos_mask;
+ uint16_t *p;
+ int num_probs;
+ struct rc rc;
+ int i, mi;
+ struct writer wr;
+ struct cstate cst;
+ unsigned char *inbuf;
+ int ret = -1;
+
+ set_error_fn(error_fn);
+ if (!flush)
+ in_len -= 4; /* Uncompressed size hack active in pre-boot
+ environment */
+ if (buf)
+ inbuf = buf;
+ else
+ inbuf = malloc(LZMA_IOBUF_SIZE);
+ if (!inbuf) {
+		error("Could not allocate input buffer");
+ goto exit_0;
+ }
+
+ cst.state = 0;
+ cst.rep0 = cst.rep1 = cst.rep2 = cst.rep3 = 1;
+
+ wr.header = &header;
+ wr.flush = flush;
+ wr.global_pos = 0;
+ wr.previous_byte = 0;
+ wr.buffer_pos = 0;
+
+ rc_init(&rc, fill, inbuf, in_len);
+
+ for (i = 0; i < sizeof(header); i++) {
+ if (rc.ptr >= rc.buffer_end)
+ rc_read(&rc);
+ ((unsigned char *)&header)[i] = *rc.ptr++;
+ }
+
+ if (header.pos >= (9 * 5 * 5))
+ error("bad header");
+
+ mi = 0;
+ lc = header.pos;
+ while (lc >= 9) {
+ mi++;
+ lc -= 9;
+ }
+ pb = 0;
+ lp = mi;
+ while (lp >= 5) {
+ pb++;
+ lp -= 5;
+ }
+ pos_state_mask = (1 << pb) - 1;
+ literal_pos_mask = (1 << lp) - 1;
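+	/* e.g. the default LZMA properties byte 0x5d (93 = 3 + 9 * (0 + 5 * 2))
+	 * decodes to lc = 3, lp = 0 and pb = 2.
+	 */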
+
+ ENDIAN_CONVERT(header.dict_size);
+ ENDIAN_CONVERT(header.dst_size);
+
+ if (header.dict_size == 0)
+ header.dict_size = 1;
+
+ if (output)
+ wr.buffer = output;
+ else {
+ wr.bufsize = MIN(header.dst_size, header.dict_size);
+ wr.buffer = large_malloc(wr.bufsize);
+ }
+ if (wr.buffer == NULL)
+ goto exit_1;
+
+ num_probs = LZMA_BASE_SIZE + (LZMA_LIT_SIZE << (lc + lp));
+ p = (uint16_t *) large_malloc(num_probs * sizeof(*p));
+	if (!p)
+ goto exit_2;
+ num_probs = LZMA_LITERAL + (LZMA_LIT_SIZE << (lc + lp));
+ for (i = 0; i < num_probs; i++)
+ p[i] = (1 << RC_MODEL_TOTAL_BITS) >> 1;
+
+ rc_init_code(&rc);
+
+ while (get_pos(&wr) < header.dst_size) {
+ int pos_state = get_pos(&wr) & pos_state_mask;
+ uint16_t *prob = p + LZMA_IS_MATCH +
+ (cst.state << LZMA_NUM_POS_BITS_MAX) + pos_state;
+ if (rc_is_bit_0(&rc, prob))
+ process_bit0(&wr, &rc, &cst, p, pos_state, prob,
+ lc, literal_pos_mask);
+ else {
+ process_bit1(&wr, &rc, &cst, p, pos_state, prob);
+ if (cst.rep0 == 0)
+ break;
+ }
+ }
+
+ if (posp)
+ *posp = rc.ptr-rc.buffer;
+ if (wr.flush)
+ wr.flush(wr.buffer, wr.buffer_pos);
+ ret = 0;
+ large_free(p);
+exit_2:
+ if (!output)
+ large_free(wr.buffer);
+exit_1:
+ if (!buf)
+ free(inbuf);
+exit_0:
+ return ret;
+}
+
+#define decompress unlzma
n = idp->layers * IDR_BITS;
p = idp->top;
+ rcu_assign_pointer(idp->top, NULL);
max = 1 << n;
id = 0;
p = *--paa;
}
}
- rcu_assign_pointer(idp->top, NULL);
idp->layers = 0;
}
EXPORT_SYMBOL(idr_remove_all);
+#ifndef INFLATE_H
+#define INFLATE_H
+
/* inflate.h -- internal inflate state definition
* Copyright (C) 1995-2004 Mark Adler
* For conditions of distribution and use, see copyright notice in zlib.h
unsigned short work[288]; /* work area for code table building */
code codes[ENOUGH]; /* space for code tables */
};
+#endif
+#ifndef INFTREES_H
+#define INFTREES_H
+
/* inftrees.h -- header to use inftrees.c
* Copyright (C) 1995-2005 Mark Adler
* For conditions of distribution and use, see copyright notice in zlib.h
extern int zlib_inflate_table (codetype type, unsigned short *lens,
unsigned codes, code **table,
unsigned *bits, unsigned short *work);
+#endif
obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
obj-$(CONFIG_FS_XIP) += filemap_xip.o
obj-$(CONFIG_MIGRATION) += migrate.o
+ifdef CONFIG_HAVE_DYNAMIC_PER_CPU_AREA
+obj-$(CONFIG_SMP) += percpu.o
+else
obj-$(CONFIG_SMP) += allocpercpu.o
+endif
obj-$(CONFIG_QUICKLIST) += quicklist.o
obj-$(CONFIG_CGROUP_MEM_RES_CTLR) += memcontrol.o page_cgroup.o
__percpu_populate_mask((__pdata), (size), (gfp), &(mask))
/**
- * percpu_alloc_mask - initial setup of per-cpu data
+ * alloc_percpu - initial setup of per-cpu data
* @size: size of per-cpu object
- * @gfp: may sleep or not etc.
- * @mask: populate per-data for cpu's selected through mask bits
+ * @align: alignment
*
- * Populating per-cpu data for all online cpu's would be a typical use case,
- * which is simplified by the percpu_alloc() wrapper.
- * Per-cpu objects are populated with zeroed buffers.
+ * Allocate dynamic percpu area. Percpu objects are populated with
+ * zeroed buffers.
*/
-void *__percpu_alloc_mask(size_t size, gfp_t gfp, cpumask_t *mask)
+void *__alloc_percpu(size_t size, size_t align)
{
/*
* We allocate whole cache lines to avoid false sharing
*/
size_t sz = roundup(nr_cpu_ids * sizeof(void *), cache_line_size());
- void *pdata = kzalloc(sz, gfp);
+ void *pdata = kzalloc(sz, GFP_KERNEL);
void *__pdata = __percpu_disguise(pdata);
+ /*
+ * Can't easily make larger alignment work with kmalloc. WARN
+ * on it. Larger alignment should only be used for module
+ * percpu sections on SMP for which this path isn't used.
+ */
+ WARN_ON_ONCE(align > __alignof__(unsigned long long));
+
if (unlikely(!pdata))
return NULL;
- if (likely(!__percpu_populate_mask(__pdata, size, gfp, mask)))
+ if (likely(!__percpu_populate_mask(__pdata, size, GFP_KERNEL,
+ &cpu_possible_map)))
return __pdata;
kfree(pdata);
return NULL;
}
-EXPORT_SYMBOL_GPL(__percpu_alloc_mask);
+EXPORT_SYMBOL_GPL(__alloc_percpu);
/**
- * percpu_free - final cleanup of per-cpu data
+ * free_percpu - final cleanup of per-cpu data
* @__pdata: object to clean up
*
* We simply clean up any per-cpu object left. No need for the client to
 * track and specify through a bit mask which per-cpu objects are to be freed.
*/
-void percpu_free(void *__pdata)
+void free_percpu(void *__pdata)
{
if (unlikely(!__pdata))
return;
__percpu_depopulate_mask(__pdata, &cpu_possible_map);
kfree(__percpu_disguise(__pdata));
}
-EXPORT_SYMBOL_GPL(percpu_free);
+EXPORT_SYMBOL_GPL(free_percpu);
return mark_bootmem_node(pgdat->bdata, start, end, 1, flags);
}
-#ifndef CONFIG_HAVE_ARCH_BOOTMEM_NODE
/**
 * reserve_bootmem - mark a page range as reserved
* @addr: starting address of the range
return mark_bootmem(start, end, 1, flags);
}
-#endif /* !CONFIG_HAVE_ARCH_BOOTMEM_NODE */
static unsigned long align_idx(struct bootmem_data *bdata, unsigned long idx,
unsigned long step)
}
static void * __init alloc_bootmem_core(struct bootmem_data *bdata,
- unsigned long size, unsigned long align,
- unsigned long goal, unsigned long limit)
+ unsigned long size, unsigned long align,
+ unsigned long goal, unsigned long limit)
{
unsigned long fallback = 0;
unsigned long min, max, start, sidx, midx, step;
return NULL;
}
+static void * __init alloc_arch_preferred_bootmem(bootmem_data_t *bdata,
+ unsigned long size, unsigned long align,
+ unsigned long goal, unsigned long limit)
+{
+#ifdef CONFIG_HAVE_ARCH_BOOTMEM
+ bootmem_data_t *p_bdata;
+
+ p_bdata = bootmem_arch_preferred_node(bdata, size, align, goal, limit);
+ if (p_bdata)
+ return alloc_bootmem_core(p_bdata, size, align, goal, limit);
+#endif
+ return NULL;
+}
+
static void * __init ___alloc_bootmem_nopanic(unsigned long size,
unsigned long align,
unsigned long goal,
unsigned long limit)
{
bootmem_data_t *bdata;
+ void *region;
restart:
- list_for_each_entry(bdata, &bdata_list, list) {
- void *region;
+ region = alloc_arch_preferred_bootmem(NULL, size, align, goal, limit);
+ if (region)
+ return region;
+ list_for_each_entry(bdata, &bdata_list, list) {
if (goal && bdata->node_low_pfn <= PFN_DOWN(goal))
continue;
if (limit && bdata->node_min_pfn >= PFN_DOWN(limit))
{
void *ptr;
+ ptr = alloc_arch_preferred_bootmem(bdata, size, align, goal, limit);
+ if (ptr)
+ return ptr;
+
ptr = alloc_bootmem_core(bdata, size, align, goal, limit);
if (ptr)
return ptr;
{
void *ptr;
+ ptr = alloc_arch_preferred_bootmem(pgdat->bdata, size, align, goal, 0);
+ if (ptr)
+ return ptr;
+
ptr = alloc_bootmem_core(pgdat->bdata, size, align, goal, 0);
if (ptr)
return ptr;
static size_t __iovec_copy_from_user_inatomic(char *vaddr,
const struct iovec *iov, size_t base, size_t bytes)
{
- size_t copied = 0, left = 0, total = bytes;
+ size_t copied = 0, left = 0;
while (bytes) {
char __user *buf = iov->iov_base + base;
int copy = min(bytes, iov->iov_len - base);
base = 0;
- left = __copy_from_user_inatomic_nocache(vaddr, buf, copy, total);
+ left = __copy_from_user_inatomic(vaddr, buf, copy);
copied += copy;
bytes -= copy;
vaddr += copy;
if (likely(i->nr_segs == 1)) {
int left;
char __user *buf = i->iov->iov_base + i->iov_offset;
-
- left = __copy_from_user_inatomic_nocache(kaddr + offset,
- buf, bytes, bytes);
+ left = __copy_from_user_inatomic(kaddr + offset, buf, bytes);
copied = bytes - left;
} else {
copied = __iovec_copy_from_user_inatomic(kaddr + offset,
if (likely(i->nr_segs == 1)) {
int left;
char __user *buf = i->iov->iov_base + i->iov_offset;
-
- left = __copy_from_user_nocache(kaddr + offset, buf, bytes, bytes);
+ left = __copy_from_user(kaddr + offset, buf, bytes);
copied = bytes - left;
} else {
copied = __iovec_copy_from_user_inatomic(kaddr + offset,
break;
copied = bytes -
- __copy_from_user_nocache(xip_mem + offset, buf, bytes, bytes);
+ __copy_from_user_nocache(xip_mem + offset, buf, bytes);
if (likely(copied > 0)) {
status = copied;
--- /dev/null
+/*
+ * linux/mm/percpu.c - percpu memory allocator
+ *
+ * Copyright (C) 2009 SUSE Linux Products GmbH
+ * Copyright (C) 2009 Tejun Heo <tj@kernel.org>
+ *
+ * This file is released under the GPLv2.
+ *
+ * This is a percpu allocator which can handle both static and dynamic
+ * areas.  Percpu areas are allocated in chunks in the vmalloc area.  Each
+ * chunk consists of num_possible_cpus() units and the first chunk
+ * is used for static percpu variables in the kernel image (special
+ * boot time alloc/init handling is necessary as these areas need to be
+ * brought up before allocation services are running).  Units grow as
+ * necessary and all units grow or shrink in unison.  When a chunk is
+ * filled up, another chunk is allocated, i.e. in the vmalloc area:
+ *
+ * c0 c1 c2
+ * ------------------- ------------------- ------------
+ * | u0 | u1 | u2 | u3 | | u0 | u1 | u2 | u3 | | u0 | u1 | u
+ * ------------------- ...... ------------------- .... ------------
+ *
+ * Allocation is done in offset-size areas of a single unit space.  I.e.,
+ * an area of 512 bytes at 6k in c1 occupies 512 bytes at 6k of c1:u0,
+ * c1:u1, c1:u2 and c1:u3. Percpu access can be done by configuring
+ * percpu base registers UNIT_SIZE apart.
+ *
+ * There are usually many small percpu allocations, many of them as
+ * small as 4 bytes.  The allocator organizes chunks into lists
+ * according to free size and tries to allocate from the fullest one.
+ * Each chunk keeps the maximum contiguous area size hint which is
+ * guaranteed to be equal to or larger than the maximum contiguous
+ * area in the chunk. This helps the allocator not to iterate the
+ * chunk maps unnecessarily.
+ *
+ * Allocation state in each chunk is kept using an array of integers
+ * on chunk->map. A positive value in the map represents a free
+ * region and a negative value an allocated one.  Allocation inside a
+ * chunk is done by scanning this map sequentially and serving the
+ * first matching entry.  This is mostly copied from the
+ * percpu_modalloc() allocator.  Chunks are also linked into an rb
+ * tree to ease address-to-chunk mapping during free.
+ *
+ * To use this allocator, arch code should do the following.
+ *
+ * - define CONFIG_HAVE_DYNAMIC_PER_CPU_AREA
+ *
+ * - define __addr_to_pcpu_ptr() and __pcpu_ptr_to_addr() to translate
+ * regular address to percpu pointer and back
+ *
+ * - use pcpu_setup_first_chunk() during percpu area initialization to
+ * setup the first chunk containing the kernel static percpu area
+ */
+
+#include <linux/bitmap.h>
+#include <linux/bootmem.h>
+#include <linux/list.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/percpu.h>
+#include <linux/pfn.h>
+#include <linux/rbtree.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/vmalloc.h>
+#include <linux/workqueue.h>
+
+#include <asm/cacheflush.h>
+#include <asm/tlbflush.h>
+
+#define PCPU_SLOT_BASE_SHIFT		5	/* 1-31 share the same slot */
+#define PCPU_DFL_MAP_ALLOC 16 /* start a map with 16 ents */
+
+struct pcpu_chunk {
+ struct list_head list; /* linked to pcpu_slot lists */
+ struct rb_node rb_node; /* key is chunk->vm->addr */
+ int free_size; /* free bytes in the chunk */
+ int contig_hint; /* max contiguous size hint */
+ struct vm_struct *vm; /* mapped vmalloc region */
+ int map_used; /* # of map entries used */
+ int map_alloc; /* # of map entries allocated */
+ int *map; /* allocation map */
+ bool immutable; /* no [de]population allowed */
+ struct page **page; /* points to page array */
+ struct page *page_ar[]; /* #cpus * UNIT_PAGES */
+};
+
+static int pcpu_unit_pages __read_mostly;
+static int pcpu_unit_size __read_mostly;
+static int pcpu_chunk_size __read_mostly;
+static int pcpu_nr_slots __read_mostly;
+static size_t pcpu_chunk_struct_size __read_mostly;
+
+/* the address of the first chunk which starts with the kernel static area */
+void *pcpu_base_addr __read_mostly;
+EXPORT_SYMBOL_GPL(pcpu_base_addr);
+
+/* optional reserved chunk, only accessible for reserved allocations */
+static struct pcpu_chunk *pcpu_reserved_chunk;
+/* offset limit of the reserved chunk */
+static int pcpu_reserved_chunk_limit;
+
+/*
+ * Synchronization rules.
+ *
+ * There are two locks - pcpu_alloc_mutex and pcpu_lock. The former
+ * protects allocation/reclaim paths, chunks and chunk->page arrays.
+ * The latter is a spinlock and protects the index data structures -
+ * chunk slots, rbtree, chunks and area maps in chunks.
+ *
+ * During allocation, pcpu_alloc_mutex is kept locked all the time and
+ * pcpu_lock is grabbed and released as necessary. All actual memory
+ * allocations are done using GFP_KERNEL with pcpu_lock released.
+ *
+ * Free path accesses and alters only the index data structures, so it
+ * can be safely called from atomic context. When memory needs to be
+ * returned to the system, free path schedules reclaim_work which
+ * grabs both pcpu_alloc_mutex and pcpu_lock, unlinks chunks to be
+ * reclaimed, releases both locks and frees the chunks.  Note that it's
+ * necessary to grab both locks to remove a chunk from circulation as
+ * allocation path might be referencing the chunk with only
+ * pcpu_alloc_mutex locked.
+ */
+static DEFINE_MUTEX(pcpu_alloc_mutex); /* protects whole alloc and reclaim */
+static DEFINE_SPINLOCK(pcpu_lock); /* protects index data structures */
+
+static struct list_head *pcpu_slot __read_mostly; /* chunk list slots */
+static struct rb_root pcpu_addr_root = RB_ROOT; /* chunks by address */
+
+/* reclaim work to release fully free chunks, scheduled from free path */
+static void pcpu_reclaim(struct work_struct *work);
+static DECLARE_WORK(pcpu_reclaim_work, pcpu_reclaim);
+
+static int __pcpu_size_to_slot(int size)
+{
+ int highbit = fls(size); /* size is in bytes */
+ return max(highbit - PCPU_SLOT_BASE_SHIFT + 2, 1);
+}
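+/* e.g. fls(512) == 10, so a 512-byte free size maps to slot 7; all
+ * sizes below 32 bytes land in the first two slots.
+ */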
+
+static int pcpu_size_to_slot(int size)
+{
+ if (size == pcpu_unit_size)
+ return pcpu_nr_slots - 1;
+ return __pcpu_size_to_slot(size);
+}
+
+static int pcpu_chunk_slot(const struct pcpu_chunk *chunk)
+{
+ if (chunk->free_size < sizeof(int) || chunk->contig_hint < sizeof(int))
+ return 0;
+
+ return pcpu_size_to_slot(chunk->free_size);
+}
+
+static int pcpu_page_idx(unsigned int cpu, int page_idx)
+{
+ return cpu * pcpu_unit_pages + page_idx;
+}
+
+static struct page **pcpu_chunk_pagep(struct pcpu_chunk *chunk,
+ unsigned int cpu, int page_idx)
+{
+ return &chunk->page[pcpu_page_idx(cpu, page_idx)];
+}
+
+static unsigned long pcpu_chunk_addr(struct pcpu_chunk *chunk,
+ unsigned int cpu, int page_idx)
+{
+ return (unsigned long)chunk->vm->addr +
+ (pcpu_page_idx(cpu, page_idx) << PAGE_SHIFT);
+}
+
+static bool pcpu_chunk_page_occupied(struct pcpu_chunk *chunk,
+ int page_idx)
+{
+ return *pcpu_chunk_pagep(chunk, 0, page_idx) != NULL;
+}
+
+/**
+ * pcpu_mem_alloc - allocate memory
+ * @size: bytes to allocate
+ *
+ * Allocate @size bytes.  If @size is at most PAGE_SIZE, kzalloc() is
+ * used; otherwise, vmalloc() is used.  The returned memory is always
+ * zeroed.
+ *
+ * CONTEXT:
+ * Does GFP_KERNEL allocation.
+ *
+ * RETURNS:
+ * Pointer to the allocated area on success, NULL on failure.
+ */
+static void *pcpu_mem_alloc(size_t size)
+{
+ if (size <= PAGE_SIZE)
+ return kzalloc(size, GFP_KERNEL);
+ else {
+ void *ptr = vmalloc(size);
+ if (ptr)
+ memset(ptr, 0, size);
+ return ptr;
+ }
+}
+
+/**
+ * pcpu_mem_free - free memory
+ * @ptr: memory to free
+ * @size: size of the area
+ *
+ * Free @ptr. @ptr should have been allocated using pcpu_mem_alloc().
+ */
+static void pcpu_mem_free(void *ptr, size_t size)
+{
+ if (size <= PAGE_SIZE)
+ kfree(ptr);
+ else
+ vfree(ptr);
+}
+
+/**
+ * pcpu_chunk_relocate - put chunk in the appropriate chunk slot
+ * @chunk: chunk of interest
+ * @oslot: the previous slot it was on
+ *
+ * This function is called after an allocation or free changed @chunk.
+ * New slot according to the changed state is determined and @chunk is
+ * moved to the slot. Note that the reserved chunk is never put on
+ * chunk slots.
+ *
+ * CONTEXT:
+ * pcpu_lock.
+ */
+static void pcpu_chunk_relocate(struct pcpu_chunk *chunk, int oslot)
+{
+ int nslot = pcpu_chunk_slot(chunk);
+
+ if (chunk != pcpu_reserved_chunk && oslot != nslot) {
+ if (oslot < nslot)
+ list_move(&chunk->list, &pcpu_slot[nslot]);
+ else
+ list_move_tail(&chunk->list, &pcpu_slot[nslot]);
+ }
+}
+
+static struct rb_node **pcpu_chunk_rb_search(void *addr,
+ struct rb_node **parentp)
+{
+ struct rb_node **p = &pcpu_addr_root.rb_node;
+ struct rb_node *parent = NULL;
+ struct pcpu_chunk *chunk;
+
+ while (*p) {
+ parent = *p;
+ chunk = rb_entry(parent, struct pcpu_chunk, rb_node);
+
+ if (addr < chunk->vm->addr)
+ p = &(*p)->rb_left;
+ else if (addr > chunk->vm->addr)
+ p = &(*p)->rb_right;
+ else
+ break;
+ }
+
+ if (parentp)
+ *parentp = parent;
+ return p;
+}
+
+/**
+ * pcpu_chunk_addr_search - search for chunk containing specified address
+ * @addr: address to search for
+ *
+ * Look for the chunk which might contain @addr.  More specifically, it
+ * searches for the chunk with the highest start address which isn't
+ * beyond @addr.
+ *
+ * CONTEXT:
+ * pcpu_lock.
+ *
+ * RETURNS:
+ * The address of the found chunk.
+ */
+static struct pcpu_chunk *pcpu_chunk_addr_search(void *addr)
+{
+ struct rb_node *n, *parent;
+ struct pcpu_chunk *chunk;
+
+ /* is it in the reserved chunk? */
+ if (pcpu_reserved_chunk) {
+ void *start = pcpu_reserved_chunk->vm->addr;
+
+ if (addr >= start && addr < start + pcpu_reserved_chunk_limit)
+ return pcpu_reserved_chunk;
+ }
+
+ /* nah... search the regular ones */
+ n = *pcpu_chunk_rb_search(addr, &parent);
+ if (!n) {
+ /* no exactly matching chunk, the parent is the closest */
+ n = parent;
+ BUG_ON(!n);
+ }
+ chunk = rb_entry(n, struct pcpu_chunk, rb_node);
+
+ if (addr < chunk->vm->addr) {
+ /* the parent was the next one, look for the previous one */
+ n = rb_prev(n);
+ BUG_ON(!n);
+ chunk = rb_entry(n, struct pcpu_chunk, rb_node);
+ }
+
+ return chunk;
+}
+
+/**
+ * pcpu_chunk_addr_insert - insert chunk into address rb tree
+ * @new: chunk to insert
+ *
+ * Insert @new into address rb tree.
+ *
+ * CONTEXT:
+ * pcpu_lock.
+ */
+static void pcpu_chunk_addr_insert(struct pcpu_chunk *new)
+{
+ struct rb_node **p, *parent;
+
+ p = pcpu_chunk_rb_search(new->vm->addr, &parent);
+ BUG_ON(*p);
+ rb_link_node(&new->rb_node, parent, p);
+ rb_insert_color(&new->rb_node, &pcpu_addr_root);
+}
+
+/**
+ * pcpu_extend_area_map - extend area map for allocation
+ * @chunk: target chunk
+ *
+ * Extend the area map of @chunk so that it can accommodate an allocation.
+ * A single allocation can split an area into three areas, so this
+ * function makes sure that @chunk->map has at least two extra slots.
+ *
+ * CONTEXT:
+ * pcpu_alloc_mutex, pcpu_lock. pcpu_lock is released and reacquired
+ * if area map is extended.
+ *
+ * RETURNS:
+ * 0 if noop, 1 if successfully extended, -errno on failure.
+ */
+static int pcpu_extend_area_map(struct pcpu_chunk *chunk)
+{
+ int new_alloc;
+ int *new;
+ size_t size;
+
+ /* has enough? */
+ if (chunk->map_alloc >= chunk->map_used + 2)
+ return 0;
+
+ spin_unlock_irq(&pcpu_lock);
+
+ new_alloc = PCPU_DFL_MAP_ALLOC;
+ while (new_alloc < chunk->map_used + 2)
+ new_alloc *= 2;
+
+ new = pcpu_mem_alloc(new_alloc * sizeof(new[0]));
+ if (!new) {
+ spin_lock_irq(&pcpu_lock);
+ return -ENOMEM;
+ }
+
+ /*
+	 * Acquire pcpu_lock and switch to the new area map.  Only a free
+	 * could have happened in between, so map_used couldn't have
+ * grown.
+ */
+ spin_lock_irq(&pcpu_lock);
+ BUG_ON(new_alloc < chunk->map_used + 2);
+
+ size = chunk->map_alloc * sizeof(chunk->map[0]);
+ memcpy(new, chunk->map, size);
+
+ /*
+ * map_alloc < PCPU_DFL_MAP_ALLOC indicates that the chunk is
+ * one of the first chunks and still using static map.
+ */
+ if (chunk->map_alloc >= PCPU_DFL_MAP_ALLOC)
+ pcpu_mem_free(chunk->map, size);
+
+ chunk->map_alloc = new_alloc;
+ chunk->map = new;
+ return 0;
+}
+
+/**
+ * pcpu_split_block - split a map block
+ * @chunk: chunk of interest
+ * @i: index of map block to split
+ * @head: head size in bytes (can be 0)
+ * @tail: tail size in bytes (can be 0)
+ *
+ * Split the @i'th map block into two or three blocks.  If @head is
+ * non-zero, a @head-byte block is inserted before block @i, moving it
+ * to @i+1 and reducing its size by @head bytes.
+ *
+ * If @tail is non-zero, the target block, which can be @i or @i+1
+ * depending on @head, is reduced by @tail bytes and a @tail-byte block
+ * is inserted after the target block.
+ *
+ * @chunk->map must have enough free slots to accommodate the split.
+ *
+ * CONTEXT:
+ * pcpu_lock.
+ */
+static void pcpu_split_block(struct pcpu_chunk *chunk, int i,
+ int head, int tail)
+{
+ int nr_extra = !!head + !!tail;
+
+ BUG_ON(chunk->map_alloc < chunk->map_used + nr_extra);
+
+ /* insert new subblocks */
+ memmove(&chunk->map[i + nr_extra], &chunk->map[i],
+ sizeof(chunk->map[0]) * (chunk->map_used - i));
+ chunk->map_used += nr_extra;
+
+ if (head) {
+ chunk->map[i + 1] = chunk->map[i] - head;
+ chunk->map[i++] = head;
+ }
+ if (tail) {
+ chunk->map[i++] -= tail;
+ chunk->map[i] = tail;
+ }
+}
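+/* e.g. splitting a 1024-byte free block with head == 128 and
+ * tail == 640 leaves blocks of 128, 256 and 640 bytes; the caller in
+ * pcpu_alloc_area() then negates the middle entry to mark those 256
+ * bytes allocated.
+ */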
+
+/**
+ * pcpu_alloc_area - allocate area from a pcpu_chunk
+ * @chunk: chunk of interest
+ * @size: wanted size in bytes
+ * @align: wanted align
+ *
+ * Try to allocate @size bytes area aligned at @align from @chunk.
+ * Note that this function only allocates the offset. It doesn't
+ * populate or map the area.
+ *
+ * @chunk->map must have at least two free slots.
+ *
+ * CONTEXT:
+ * pcpu_lock.
+ *
+ * RETURNS:
+ * Allocated offset in @chunk on success, -1 if no matching area is
+ * found.
+ */
+static int pcpu_alloc_area(struct pcpu_chunk *chunk, int size, int align)
+{
+ int oslot = pcpu_chunk_slot(chunk);
+ int max_contig = 0;
+ int i, off;
+
+ for (i = 0, off = 0; i < chunk->map_used; off += abs(chunk->map[i++])) {
+ bool is_last = i + 1 == chunk->map_used;
+ int head, tail;
+
+ /* extra for alignment requirement */
+ head = ALIGN(off, align) - off;
+ BUG_ON(i == 0 && head != 0);
+
+ if (chunk->map[i] < 0)
+ continue;
+ if (chunk->map[i] < head + size) {
+ max_contig = max(chunk->map[i], max_contig);
+ continue;
+ }
+
+ /*
+ * If head is small or the previous block is free,
+ * merge'em. Note that 'small' is defined as smaller
+ * than sizeof(int), which is very small but isn't too
+ * uncommon for percpu allocations.
+ */
+ if (head && (head < sizeof(int) || chunk->map[i - 1] > 0)) {
+ if (chunk->map[i - 1] > 0)
+ chunk->map[i - 1] += head;
+ else {
+ chunk->map[i - 1] -= head;
+ chunk->free_size -= head;
+ }
+ chunk->map[i] -= head;
+ off += head;
+ head = 0;
+ }
+
+ /* if tail is small, just keep it around */
+ tail = chunk->map[i] - head - size;
+ if (tail < sizeof(int))
+ tail = 0;
+
+ /* split if warranted */
+ if (head || tail) {
+ pcpu_split_block(chunk, i, head, tail);
+ if (head) {
+ i++;
+ off += head;
+ max_contig = max(chunk->map[i - 1], max_contig);
+ }
+ if (tail)
+ max_contig = max(chunk->map[i + 1], max_contig);
+ }
+
+ /* update hint and mark allocated */
+ if (is_last)
+ chunk->contig_hint = max_contig; /* fully scanned */
+ else
+ chunk->contig_hint = max(chunk->contig_hint,
+ max_contig);
+
+ chunk->free_size -= chunk->map[i];
+ chunk->map[i] = -chunk->map[i];
+
+ pcpu_chunk_relocate(chunk, oslot);
+ return off;
+ }
+
+ chunk->contig_hint = max_contig; /* fully scanned */
+ pcpu_chunk_relocate(chunk, oslot);
+
+ /* tell the upper layer that this chunk has no matching area */
+ return -1;
+}
+
+/**
+ * pcpu_free_area - free area to a pcpu_chunk
+ * @chunk: chunk of interest
+ * @freeme: offset of area to free
+ *
+ * Free the area starting at @freeme in @chunk.  Note that this function
+ * only modifies the allocation map. It doesn't depopulate or unmap
+ * the area.
+ *
+ * CONTEXT:
+ * pcpu_lock.
+ */
+static void pcpu_free_area(struct pcpu_chunk *chunk, int freeme)
+{
+ int oslot = pcpu_chunk_slot(chunk);
+ int i, off;
+
+ for (i = 0, off = 0; i < chunk->map_used; off += abs(chunk->map[i++]))
+ if (off == freeme)
+ break;
+ BUG_ON(off != freeme);
+ BUG_ON(chunk->map[i] > 0);
+
+ chunk->map[i] = -chunk->map[i];
+ chunk->free_size += chunk->map[i];
+
+ /* merge with previous? */
+ if (i > 0 && chunk->map[i - 1] >= 0) {
+ chunk->map[i - 1] += chunk->map[i];
+ chunk->map_used--;
+ memmove(&chunk->map[i], &chunk->map[i + 1],
+ (chunk->map_used - i) * sizeof(chunk->map[0]));
+ i--;
+ }
+ /* merge with next? */
+ if (i + 1 < chunk->map_used && chunk->map[i + 1] >= 0) {
+ chunk->map[i] += chunk->map[i + 1];
+ chunk->map_used--;
+ memmove(&chunk->map[i + 1], &chunk->map[i + 2],
+ (chunk->map_used - (i + 1)) * sizeof(chunk->map[0]));
+ }
+
+ chunk->contig_hint = max(chunk->map[i], chunk->contig_hint);
+ pcpu_chunk_relocate(chunk, oslot);
+}
+
+/**
+ * pcpu_unmap - unmap pages out of a pcpu_chunk
+ * @chunk: chunk of interest
+ * @page_start: page index of the first page to unmap
+ * @page_end: page index of the last page to unmap + 1
+ * @flush: whether to flush cache and tlb or not
+ *
+ * For each cpu, unmap pages [@page_start,@page_end) out of @chunk.
+ * If @flush is true, vcache is flushed before unmapping and tlb
+ * after.
+ */
+static void pcpu_unmap(struct pcpu_chunk *chunk, int page_start, int page_end,
+ bool flush)
+{
+ unsigned int last = num_possible_cpus() - 1;
+ unsigned int cpu;
+
+ /* unmap must not be done on immutable chunk */
+ WARN_ON(chunk->immutable);
+
+ /*
+	 * Each flush can be very expensive, so issue the flush on the
+	 * whole region at once rather than doing it for each cpu.
+	 * This may be overkill but is more scalable.
+ */
+ if (flush)
+ flush_cache_vunmap(pcpu_chunk_addr(chunk, 0, page_start),
+ pcpu_chunk_addr(chunk, last, page_end));
+
+ for_each_possible_cpu(cpu)
+ unmap_kernel_range_noflush(
+ pcpu_chunk_addr(chunk, cpu, page_start),
+ (page_end - page_start) << PAGE_SHIFT);
+
+ /* ditto as flush_cache_vunmap() */
+ if (flush)
+ flush_tlb_kernel_range(pcpu_chunk_addr(chunk, 0, page_start),
+ pcpu_chunk_addr(chunk, last, page_end));
+}
+
+/**
+ * pcpu_depopulate_chunk - depopulate and unmap an area of a pcpu_chunk
+ * @chunk: chunk to depopulate
+ * @off: offset to the area to depopulate
+ * @size: size of the area to depopulate in bytes
+ * @flush: whether to flush cache and tlb or not
+ *
+ * For each cpu, depopulate and unmap pages [@page_start,@page_end)
+ * from @chunk. If @flush is true, vcache is flushed before unmapping
+ * and tlb after.
+ *
+ * CONTEXT:
+ * pcpu_alloc_mutex.
+ */
+static void pcpu_depopulate_chunk(struct pcpu_chunk *chunk, int off, int size,
+ bool flush)
+{
+ int page_start = PFN_DOWN(off);
+ int page_end = PFN_UP(off + size);
+ int unmap_start = -1;
+ int uninitialized_var(unmap_end);
+ unsigned int cpu;
+ int i;
+
+ for (i = page_start; i < page_end; i++) {
+ for_each_possible_cpu(cpu) {
+ struct page **pagep = pcpu_chunk_pagep(chunk, cpu, i);
+
+ if (!*pagep)
+ continue;
+
+ __free_page(*pagep);
+
+ /*
+ * If it's partial depopulation, it might get
+ * populated or depopulated again. Mark the
+ * page gone.
+ */
+ *pagep = NULL;
+
+ unmap_start = unmap_start < 0 ? i : unmap_start;
+ unmap_end = i + 1;
+ }
+ }
+
+ if (unmap_start >= 0)
+ pcpu_unmap(chunk, unmap_start, unmap_end, flush);
+}
+
+/**
+ * pcpu_map - map pages into a pcpu_chunk
+ * @chunk: chunk of interest
+ * @page_start: page index of the first page to map
+ * @page_end: page index of the last page to map + 1
+ *
+ * For each cpu, map pages [@page_start,@page_end) into @chunk.
+ * vcache is flushed afterwards.
+ */
+static int pcpu_map(struct pcpu_chunk *chunk, int page_start, int page_end)
+{
+ unsigned int last = num_possible_cpus() - 1;
+ unsigned int cpu;
+ int err;
+
+ /* map must not be done on immutable chunk */
+ WARN_ON(chunk->immutable);
+
+ for_each_possible_cpu(cpu) {
+ err = map_kernel_range_noflush(
+ pcpu_chunk_addr(chunk, cpu, page_start),
+ (page_end - page_start) << PAGE_SHIFT,
+ PAGE_KERNEL,
+ pcpu_chunk_pagep(chunk, cpu, page_start));
+ if (err < 0)
+ return err;
+ }
+
+ /* flush at once, please read comments in pcpu_unmap() */
+ flush_cache_vmap(pcpu_chunk_addr(chunk, 0, page_start),
+ pcpu_chunk_addr(chunk, last, page_end));
+ return 0;
+}
+
+/**
+ * pcpu_populate_chunk - populate and map an area of a pcpu_chunk
+ * @chunk: chunk of interest
+ * @off: offset to the area to populate
+ * @size: size of the area to populate in bytes
+ *
+ * For each cpu, populate and map pages [@page_start,@page_end) into
+ * @chunk. The area is cleared on return.
+ *
+ * CONTEXT:
+ * pcpu_alloc_mutex, does GFP_KERNEL allocation.
+ */
+static int pcpu_populate_chunk(struct pcpu_chunk *chunk, int off, int size)
+{
+ const gfp_t alloc_mask = GFP_KERNEL | __GFP_HIGHMEM | __GFP_COLD;
+ int page_start = PFN_DOWN(off);
+ int page_end = PFN_UP(off + size);
+ int map_start = -1;
+ int uninitialized_var(map_end);
+ unsigned int cpu;
+ int i;
+
+ for (i = page_start; i < page_end; i++) {
+ if (pcpu_chunk_page_occupied(chunk, i)) {
+ if (map_start >= 0) {
+ if (pcpu_map(chunk, map_start, map_end))
+ goto err;
+ map_start = -1;
+ }
+ continue;
+ }
+
+ map_start = map_start < 0 ? i : map_start;
+ map_end = i + 1;
+
+ for_each_possible_cpu(cpu) {
+ struct page **pagep = pcpu_chunk_pagep(chunk, cpu, i);
+
+ *pagep = alloc_pages_node(cpu_to_node(cpu),
+ alloc_mask, 0);
+ if (!*pagep)
+ goto err;
+ }
+ }
+
+ if (map_start >= 0 && pcpu_map(chunk, map_start, map_end))
+ goto err;
+
+ for_each_possible_cpu(cpu)
+ memset(chunk->vm->addr + cpu * pcpu_unit_size + off, 0,
+ size);
+
+ return 0;
+err:
+ /* likely under heavy memory pressure, give memory back */
+ pcpu_depopulate_chunk(chunk, off, size, true);
+ return -ENOMEM;
+}
+
+static void free_pcpu_chunk(struct pcpu_chunk *chunk)
+{
+ if (!chunk)
+ return;
+ if (chunk->vm)
+ free_vm_area(chunk->vm);
+ pcpu_mem_free(chunk->map, chunk->map_alloc * sizeof(chunk->map[0]));
+ kfree(chunk);
+}
+
+static struct pcpu_chunk *alloc_pcpu_chunk(void)
+{
+ struct pcpu_chunk *chunk;
+
+ chunk = kzalloc(pcpu_chunk_struct_size, GFP_KERNEL);
+ if (!chunk)
+ return NULL;
+
+	chunk->map = pcpu_mem_alloc(PCPU_DFL_MAP_ALLOC * sizeof(chunk->map[0]));
+	if (!chunk->map) {
+		kfree(chunk);
+		return NULL;
+	}
+	chunk->map_alloc = PCPU_DFL_MAP_ALLOC;
+	chunk->map[chunk->map_used++] = pcpu_unit_size;
+ chunk->page = chunk->page_ar;
+
+ chunk->vm = get_vm_area(pcpu_chunk_size, GFP_KERNEL);
+ if (!chunk->vm) {
+ free_pcpu_chunk(chunk);
+ return NULL;
+ }
+
+ INIT_LIST_HEAD(&chunk->list);
+ chunk->free_size = pcpu_unit_size;
+ chunk->contig_hint = pcpu_unit_size;
+
+ return chunk;
+}
+
+/**
+ * pcpu_alloc - the percpu allocator
+ * @size: size of area to allocate in bytes
+ * @align: alignment of area (max PAGE_SIZE)
+ * @reserved: allocate from the reserved chunk if available
+ *
+ * Allocate percpu area of @size bytes aligned at @align.
+ *
+ * CONTEXT:
+ * Does GFP_KERNEL allocation.
+ *
+ * RETURNS:
+ * Percpu pointer to the allocated area on success, NULL on failure.
+ */
+static void *pcpu_alloc(size_t size, size_t align, bool reserved)
+{
+ struct pcpu_chunk *chunk;
+ int slot, off;
+
+ if (unlikely(!size || size > PCPU_MIN_UNIT_SIZE || align > PAGE_SIZE)) {
+ WARN(true, "illegal size (%zu) or align (%zu) for "
+ "percpu allocation\n", size, align);
+ return NULL;
+ }
+
+ mutex_lock(&pcpu_alloc_mutex);
+ spin_lock_irq(&pcpu_lock);
+
+ /* serve reserved allocations from the reserved chunk if available */
+ if (reserved && pcpu_reserved_chunk) {
+ chunk = pcpu_reserved_chunk;
+ if (size > chunk->contig_hint ||
+ pcpu_extend_area_map(chunk) < 0)
+ goto fail_unlock;
+ off = pcpu_alloc_area(chunk, size, align);
+ if (off >= 0)
+ goto area_found;
+ goto fail_unlock;
+ }
+
+restart:
+ /* search through normal chunks */
+ for (slot = pcpu_size_to_slot(size); slot < pcpu_nr_slots; slot++) {
+ list_for_each_entry(chunk, &pcpu_slot[slot], list) {
+ if (size > chunk->contig_hint)
+ continue;
+
+ switch (pcpu_extend_area_map(chunk)) {
+ case 0:
+ break;
+ case 1:
+ goto restart; /* pcpu_lock dropped, restart */
+ default:
+ goto fail_unlock;
+ }
+
+ off = pcpu_alloc_area(chunk, size, align);
+ if (off >= 0)
+ goto area_found;
+ }
+ }
+
+ /* hmmm... no space left, create a new chunk */
+ spin_unlock_irq(&pcpu_lock);
+
+ chunk = alloc_pcpu_chunk();
+ if (!chunk)
+ goto fail_unlock_mutex;
+
+ spin_lock_irq(&pcpu_lock);
+ pcpu_chunk_relocate(chunk, -1);
+ pcpu_chunk_addr_insert(chunk);
+ goto restart;
+
+area_found:
+ spin_unlock_irq(&pcpu_lock);
+
+ /* populate, map and clear the area */
+ if (pcpu_populate_chunk(chunk, off, size)) {
+ spin_lock_irq(&pcpu_lock);
+ pcpu_free_area(chunk, off);
+ goto fail_unlock;
+ }
+
+ mutex_unlock(&pcpu_alloc_mutex);
+
+ return __addr_to_pcpu_ptr(chunk->vm->addr + off);
+
+fail_unlock:
+ spin_unlock_irq(&pcpu_lock);
+fail_unlock_mutex:
+ mutex_unlock(&pcpu_alloc_mutex);
+ return NULL;
+}
+
+/**
+ * __alloc_percpu - allocate dynamic percpu area
+ * @size: size of area to allocate in bytes
+ * @align: alignment of area (max PAGE_SIZE)
+ *
+ * Allocate percpu area of @size bytes aligned at @align. Might
+ * sleep. Might trigger writeouts.
+ *
+ * CONTEXT:
+ * Does GFP_KERNEL allocation.
+ *
+ * RETURNS:
+ * Percpu pointer to the allocated area on success, NULL on failure.
+ */
+void *__alloc_percpu(size_t size, size_t align)
+{
+ return pcpu_alloc(size, align, false);
+}
+EXPORT_SYMBOL_GPL(__alloc_percpu);
+
+/**
+ * __alloc_reserved_percpu - allocate reserved percpu area
+ * @size: size of area to allocate in bytes
+ * @align: alignment of area (max PAGE_SIZE)
+ *
+ * Allocate percpu area of @size bytes aligned at @align from reserved
+ * percpu area if arch has set it up; otherwise, allocation is served
+ * from the same dynamic area. Might sleep. Might trigger writeouts.
+ *
+ * CONTEXT:
+ * Does GFP_KERNEL allocation.
+ *
+ * RETURNS:
+ * Percpu pointer to the allocated area on success, NULL on failure.
+ */
+void *__alloc_reserved_percpu(size_t size, size_t align)
+{
+ return pcpu_alloc(size, align, true);
+}
+
+/**
+ * pcpu_reclaim - reclaim fully free chunks, workqueue function
+ * @work: unused
+ *
+ * Reclaim all fully free chunks except for the first one.
+ *
+ * CONTEXT:
+ * workqueue context.
+ */
+static void pcpu_reclaim(struct work_struct *work)
+{
+ LIST_HEAD(todo);
+ struct list_head *head = &pcpu_slot[pcpu_nr_slots - 1];
+ struct pcpu_chunk *chunk, *next;
+
+ mutex_lock(&pcpu_alloc_mutex);
+ spin_lock_irq(&pcpu_lock);
+
+ list_for_each_entry_safe(chunk, next, head, list) {
+ WARN_ON(chunk->immutable);
+
+ /* spare the first one */
+ if (chunk == list_first_entry(head, struct pcpu_chunk, list))
+ continue;
+
+ rb_erase(&chunk->rb_node, &pcpu_addr_root);
+ list_move(&chunk->list, &todo);
+ }
+
+ spin_unlock_irq(&pcpu_lock);
+ mutex_unlock(&pcpu_alloc_mutex);
+
+ list_for_each_entry_safe(chunk, next, &todo, list) {
+ pcpu_depopulate_chunk(chunk, 0, pcpu_unit_size, false);
+ free_pcpu_chunk(chunk);
+ }
+}
+
+/**
+ * free_percpu - free percpu area
+ * @ptr: pointer to area to free
+ *
+ * Free percpu area @ptr.
+ *
+ * CONTEXT:
+ * Can be called from atomic context.
+ */
+void free_percpu(void *ptr)
+{
+ void *addr = __pcpu_ptr_to_addr(ptr);
+ struct pcpu_chunk *chunk;
+ unsigned long flags;
+ int off;
+
+ if (!ptr)
+ return;
+
+ spin_lock_irqsave(&pcpu_lock, flags);
+
+ chunk = pcpu_chunk_addr_search(addr);
+ off = addr - chunk->vm->addr;
+
+ pcpu_free_area(chunk, off);
+
+	/* if there is more than one fully free chunk, wake up the grim reaper */
+ if (chunk->free_size == pcpu_unit_size) {
+ struct pcpu_chunk *pos;
+
+ list_for_each_entry(pos, &pcpu_slot[pcpu_nr_slots - 1], list)
+ if (pos != chunk) {
+ schedule_work(&pcpu_reclaim_work);
+ break;
+ }
+ }
+
+ spin_unlock_irqrestore(&pcpu_lock, flags);
+}
+EXPORT_SYMBOL_GPL(free_percpu);
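+
+/*
+ * A minimal usage sketch for the exported entry points (the struct
+ * and variable names here are hypothetical; the caller is assumed to
+ * run in process context since allocation may sleep):
+ *
+ *	struct my_stats { unsigned long packets; };
+ *	struct my_stats *stats;
+ *	int cpu;
+ *
+ *	stats = __alloc_percpu(sizeof(*stats), __alignof__(*stats));
+ *	if (!stats)
+ *		return -ENOMEM;
+ *	for_each_possible_cpu(cpu)
+ *		per_cpu_ptr(stats, cpu)->packets = 0;
+ *	...
+ *	free_percpu(stats);
+ */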
+
+/**
+ * pcpu_setup_first_chunk - initialize the first percpu chunk
+ * @get_page_fn: callback to fetch page pointer
+ * @static_size: the size of static percpu area in bytes
+ * @reserved_size: the size of reserved percpu area in bytes
+ * @unit_size: unit size in bytes, must be multiple of PAGE_SIZE, -1 for auto
+ * @dyn_size: free size for dynamic allocation in bytes, -1 for auto
+ * @base_addr: mapped address, NULL for auto
+ * @populate_pte_fn: callback to allocate pagetable, NULL if unnecessary
+ *
+ * Initialize the first percpu chunk which contains the kernel static
+ * percpu area.  This function is to be called from the arch percpu area
+ * setup path. The first two parameters are mandatory. The rest are
+ * optional.
+ *
+ * @get_page_fn() should return pointer to percpu page given cpu
+ * number and page number. It should at least return enough pages to
+ * cover the static area. The returned pages for static area should
+ * have been initialized with valid data. If @unit_size is specified,
+ * it can also return pages after the static area. NULL return
+ * indicates end of pages for the cpu. Note that @get_page_fn() must
+ * return the same number of pages for all cpus.
+ *
+ * @reserved_size, if non-zero, specifies the amount of bytes to
+ * reserve after the static area in the first chunk. This reserves
+ * the first chunk such that it's available only through reserved
+ * percpu allocation. This is primarily used to serve module percpu
+ * static areas on architectures where the addressing model has
+ * limited offset range for symbol relocations to guarantee module
+ * percpu symbols fall inside the relocatable range.
+ *
+ * @unit_size, if non-negative, specifies unit size and must be
+ * aligned to PAGE_SIZE and equal to or larger than @static_size +
+ * @reserved_size + @dyn_size.
+ *
+ * @dyn_size, if non-negative, limits the number of bytes available
+ * for dynamic allocation in the first chunk.  Specifying a non-negative
+ * value makes percpu leave the area beyond @static_size +
+ * @reserved_size + @dyn_size alone.
+ *
+ * Non-null @base_addr means that the caller already allocated virtual
+ * region for the first chunk and mapped it. percpu must not mess
+ * with the chunk. Note that @base_addr with 0 @unit_size or non-NULL
+ * @populate_pte_fn doesn't make any sense.
+ *
+ * @populate_pte_fn is used to populate the pagetable. NULL means the
+ * caller already populated the pagetable.
+ *
+ * If the first chunk ends up with both reserved and dynamic areas, it
+ * is served by two chunks - one to serve the core static and reserved
+ * areas and the other for the dynamic area. They share the same vm
+ * and page map but use different area allocation maps to stay away
+ * from each other.  The latter chunk is circulated in the chunk slots
+ * and available for dynamic allocation like any other chunks.
+ *
+ * RETURNS:
+ * The determined pcpu_unit_size which can be used to initialize
+ * percpu access.
+ */
+size_t __init pcpu_setup_first_chunk(pcpu_get_page_fn_t get_page_fn,
+ size_t static_size, size_t reserved_size,
+ ssize_t unit_size, ssize_t dyn_size,
+ void *base_addr,
+ pcpu_populate_pte_fn_t populate_pte_fn)
+{
+ static struct vm_struct first_vm;
+ static int smap[2], dmap[2];
+ struct pcpu_chunk *schunk, *dchunk = NULL;
+ unsigned int cpu;
+ int nr_pages;
+ int err, i;
+
+	/* sanity checks */
+ BUILD_BUG_ON(ARRAY_SIZE(smap) >= PCPU_DFL_MAP_ALLOC ||
+ ARRAY_SIZE(dmap) >= PCPU_DFL_MAP_ALLOC);
+ BUG_ON(!static_size);
+ if (unit_size >= 0) {
+ BUG_ON(unit_size < static_size + reserved_size +
+ (dyn_size >= 0 ? dyn_size : 0));
+ BUG_ON(unit_size & ~PAGE_MASK);
+ } else {
+ BUG_ON(dyn_size >= 0);
+ BUG_ON(base_addr);
+ }
+ BUG_ON(base_addr && populate_pte_fn);
+
+ if (unit_size >= 0)
+ pcpu_unit_pages = unit_size >> PAGE_SHIFT;
+ else
+ pcpu_unit_pages = max_t(int, PCPU_MIN_UNIT_SIZE >> PAGE_SHIFT,
+ PFN_UP(static_size + reserved_size));
+
+ pcpu_unit_size = pcpu_unit_pages << PAGE_SHIFT;
+ pcpu_chunk_size = num_possible_cpus() * pcpu_unit_size;
+ pcpu_chunk_struct_size = sizeof(struct pcpu_chunk)
+ + num_possible_cpus() * pcpu_unit_pages * sizeof(struct page *);
+
+ if (dyn_size < 0)
+ dyn_size = pcpu_unit_size - static_size - reserved_size;
+
+ /*
+ * Allocate chunk slots. The additional last slot is for
+ * empty chunks.
+ */
+ pcpu_nr_slots = __pcpu_size_to_slot(pcpu_unit_size) + 2;
+ pcpu_slot = alloc_bootmem(pcpu_nr_slots * sizeof(pcpu_slot[0]));
+ for (i = 0; i < pcpu_nr_slots; i++)
+ INIT_LIST_HEAD(&pcpu_slot[i]);
+
+ /*
+ * Initialize static chunk. If reserved_size is zero, the
+ * static chunk covers static area + dynamic allocation area
+ * in the first chunk. If reserved_size is not zero, it
+ * covers static area + reserved area (mostly used for module
+ * static percpu allocation).
+ */
+ schunk = alloc_bootmem(pcpu_chunk_struct_size);
+ INIT_LIST_HEAD(&schunk->list);
+ schunk->vm = &first_vm;
+ schunk->map = smap;
+ schunk->map_alloc = ARRAY_SIZE(smap);
+ schunk->page = schunk->page_ar;
+
+ if (reserved_size) {
+ schunk->free_size = reserved_size;
+ pcpu_reserved_chunk = schunk; /* not for dynamic alloc */
+ } else {
+ schunk->free_size = dyn_size;
+ dyn_size = 0; /* dynamic area covered */
+ }
+ schunk->contig_hint = schunk->free_size;
+
+ schunk->map[schunk->map_used++] = -static_size;
+ if (schunk->free_size)
+ schunk->map[schunk->map_used++] = schunk->free_size;
+
+ pcpu_reserved_chunk_limit = static_size + schunk->free_size;
+
+ /* init dynamic chunk if necessary */
+ if (dyn_size) {
+ dchunk = alloc_bootmem(sizeof(struct pcpu_chunk));
+ INIT_LIST_HEAD(&dchunk->list);
+ dchunk->vm = &first_vm;
+ dchunk->map = dmap;
+ dchunk->map_alloc = ARRAY_SIZE(dmap);
+ dchunk->page = schunk->page_ar; /* share page map with schunk */
+
+ dchunk->contig_hint = dchunk->free_size = dyn_size;
+ dchunk->map[dchunk->map_used++] = -pcpu_reserved_chunk_limit;
+ dchunk->map[dchunk->map_used++] = dchunk->free_size;
+ }
+
+ /* allocate vm address */
+ first_vm.flags = VM_ALLOC;
+ first_vm.size = pcpu_chunk_size;
+
+ if (!base_addr)
+ vm_area_register_early(&first_vm, PAGE_SIZE);
+ else {
+ /*
+ * Pages already mapped. No need to remap into
+ * vmalloc area. In this case the first chunks can't
+ * be mapped or unmapped by percpu and are marked
+ * immutable.
+ */
+ first_vm.addr = base_addr;
+ schunk->immutable = true;
+ if (dchunk)
+ dchunk->immutable = true;
+ }
+
+ /* assign pages */
+ nr_pages = -1;
+ for_each_possible_cpu(cpu) {
+ for (i = 0; i < pcpu_unit_pages; i++) {
+ struct page *page = get_page_fn(cpu, i);
+
+ if (!page)
+ break;
+ *pcpu_chunk_pagep(schunk, cpu, i) = page;
+ }
+
+ BUG_ON(i < PFN_UP(static_size));
+
+ if (nr_pages < 0)
+ nr_pages = i;
+ else
+ BUG_ON(nr_pages != i);
+ }
+
+ /* map them */
+ if (populate_pte_fn) {
+ for_each_possible_cpu(cpu)
+ for (i = 0; i < nr_pages; i++)
+ populate_pte_fn(pcpu_chunk_addr(schunk,
+ cpu, i));
+
+ err = pcpu_map(schunk, 0, nr_pages);
+ if (err)
+ panic("failed to setup static percpu area, err=%d\n",
+ err);
+ }
+
+ /* link the first chunk in */
+ if (!dchunk) {
+ pcpu_chunk_relocate(schunk, -1);
+ pcpu_chunk_addr_insert(schunk);
+ } else {
+ pcpu_chunk_relocate(dchunk, -1);
+ pcpu_chunk_addr_insert(dchunk);
+ }
+
+ /* we're done */
+ pcpu_base_addr = (void *)pcpu_chunk_addr(schunk, 0, 0);
+ return pcpu_unit_size;
+}
#include <linux/radix-tree.h>
#include <linux/rcupdate.h>
#include <linux/bootmem.h>
+#include <linux/pfn.h>
#include <asm/atomic.h>
#include <asm/uaccess.h>
*
* Ie. pte at addr+N*PAGE_SIZE shall point to pfn corresponding to pages[N]
*/
-static int vmap_page_range(unsigned long start, unsigned long end,
- pgprot_t prot, struct page **pages)
+static int vmap_page_range_noflush(unsigned long start, unsigned long end,
+ pgprot_t prot, struct page **pages)
{
pgd_t *pgd;
unsigned long next;
if (err)
break;
} while (pgd++, addr = next, addr != end);
- flush_cache_vmap(start, end);
if (unlikely(err))
return err;
return nr;
}
+static int vmap_page_range(unsigned long start, unsigned long end,
+ pgprot_t prot, struct page **pages)
+{
+ int ret;
+
+ ret = vmap_page_range_noflush(start, end, prot, pages);
+ flush_cache_vmap(start, end);
+ return ret;
+}
+
static inline int is_vmalloc_or_module_addr(const void *x)
{
/*
}
EXPORT_SYMBOL(vm_map_ram);
+/**
+ * vm_area_register_early - register vmap area early during boot
+ * @vm: vm_struct to register
+ * @align: requested alignment
+ *
+ * This function is used to register kernel vm area before
+ * vmalloc_init() is called. @vm->size and @vm->flags should contain
+ * proper values on entry and other fields should be zero. On return,
+ * vm->addr contains the allocated address.
+ *
+ * DO NOT USE THIS FUNCTION UNLESS YOU KNOW WHAT YOU'RE DOING.
+ */
+void __init vm_area_register_early(struct vm_struct *vm, size_t align)
+{
+ static size_t vm_init_off __initdata;
+ unsigned long addr;
+
+ addr = ALIGN(VMALLOC_START + vm_init_off, align);
+ vm_init_off = PFN_ALIGN(addr + vm->size) - VMALLOC_START;
+
+ vm->addr = (void *)addr;
+
+ vm->next = vmlist;
+ vmlist = vm;
+}
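A minimal usage sketch, mirroring the percpu first-chunk setup above
("early_vm" and "size" are hypothetical names):

	/* boot-time caller: storage must be static, registration happens
	 * before vmalloc_init() */
	static struct vm_struct early_vm;

	early_vm.flags = VM_ALLOC;
	early_vm.size = size;			/* caller-chosen size */
	vm_area_register_early(&early_vm, PAGE_SIZE);
	/* early_vm.addr now holds the assigned vmalloc-space address */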
+
void __init vmalloc_init(void)
{
struct vmap_area *va;
vmap_initialized = true;
}
+/**
+ * map_kernel_range_noflush - map kernel VM area with the specified pages
+ * @addr: start of the VM area to map
+ * @size: size of the VM area to map
+ * @prot: page protection flags to use
+ * @pages: pages to map
+ *
+ * Map PFN_UP(@size) pages at @addr. The VM area @addr and @size
+ * specify should have been allocated using get_vm_area() and its
+ * friends.
+ *
+ * NOTE:
+ * This function does NOT do any cache flushing. The caller is
+ * responsible for calling flush_cache_vmap() on to-be-mapped areas
+ * before calling this function.
+ *
+ * RETURNS:
+ * The number of pages mapped on success, -errno on failure.
+ */
+int map_kernel_range_noflush(unsigned long addr, unsigned long size,
+ pgprot_t prot, struct page **pages)
+{
+ return vmap_page_range_noflush(addr, addr + size, prot, pages);
+}
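A hedged usage sketch, following the NOTE above ("addr", "size" and
"pages" are assumed to come from get_vm_area() and friends):

	flush_cache_vmap(addr, addr + size);	/* caller's duty, per NOTE */
	nr = map_kernel_range_noflush(addr, size, PAGE_KERNEL, pages);
	if (nr < 0)
		return nr;			/* -errno on failure */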
+
+/**
+ * unmap_kernel_range_noflush - unmap kernel VM area
+ * @addr: start of the VM area to unmap
+ * @size: size of the VM area to unmap
+ *
+ * Unmap PFN_UP(@size) pages at @addr. The VM area @addr and @size
+ * specify should have been allocated using get_vm_area() and its
+ * friends.
+ *
+ * NOTE:
+ * This function does NOT do any cache flushing. The caller is
+ * responsible for calling flush_cache_vunmap() on to-be-unmapped areas
+ * before calling this function and flush_tlb_kernel_range() after.
+ */
+void unmap_kernel_range_noflush(unsigned long addr, unsigned long size)
+{
+ vunmap_page_range(addr, addr + size);
+}
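And the matching teardown protocol from the NOTE above, as a sketch:

	flush_cache_vunmap(addr, addr + size);		/* before unmapping */
	unmap_kernel_range_noflush(addr, size);
	flush_tlb_kernel_range(addr, addr + size);	/* after unmapping */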
+
+/**
+ * unmap_kernel_range - unmap kernel VM area and flush cache and TLB
+ * @addr: start of the VM area to unmap
+ * @size: size of the VM area to unmap
+ *
+ * Similar to unmap_kernel_range_noflush() but flushes vcache before
+ * the unmapping and tlb after.
+ */
void unmap_kernel_range(unsigned long addr, unsigned long size)
{
unsigned long end = addr + size;
EXPORT_SYMBOL(tr_type_trans);
EXPORT_SYMBOL(alloc_trdev);
+
+MODULE_LICENSE("GPL");
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/if_vlan.h>
+#include <linux/netpoll.h>
#include "vlan.h"
/* VLAN rx hw acceleration helper. This acts like netif_{rx,receive_skb}(). */
int __vlan_hwaccel_rx(struct sk_buff *skb, struct vlan_group *grp,
u16 vlan_tci, int polling)
{
+ if (netpoll_rx(skb))
+ return NET_RX_DROP;
+
if (skb_bond_should_drop(skb))
goto drop;
{
int err = NET_RX_SUCCESS;
+ if (netpoll_receive_skb(skb))
+ return NET_RX_DROP;
+
switch (vlan_gro_common(napi, grp, vlan_tci, skb)) {
case -1:
return netif_receive_skb(skb);
if (!skb)
goto out;
+ if (netpoll_receive_skb(skb))
+ goto out;
+
err = NET_RX_SUCCESS;
switch (vlan_gro_common(napi, grp, vlan_tci, skb)) {
int err = 0;
if (netif_device_present(real_dev) && ops->ndo_neigh_setup)
- err = ops->ndo_neigh_setup(dev, pa);
+ err = ops->ndo_neigh_setup(real_dev, pa);
return err;
}
dev->hard_header_len = real_dev->hard_header_len + VLAN_HLEN;
dev->netdev_ops = &vlan_netdev_ops;
}
+ netdev_resync_ops(dev);
if (is_vlan_dev(real_dev))
subclass = 1;
rcu_read_lock();
- /* Don't receive packets in an exiting network namespace */
- if (!net_alive(dev_net(skb->dev))) {
- kfree_skb(skb);
- goto out;
- }
-
#ifdef CONFIG_NET_CLS_ACT
if (skb->tc_verd & TC_NCLS) {
skb->tc_verd = CLR_TC_NCLS(skb->tc_verd);
int napi_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
{
+ if (netpoll_receive_skb(skb))
+ return NET_RX_DROP;
+
switch (__napi_gro_receive(napi, skb)) {
case -1:
return netif_receive_skb(skb);
if (!skb)
goto out;
+ if (netpoll_receive_skb(skb))
+ goto out;
+
err = NET_RX_SUCCESS;
switch (__napi_gro_receive(napi, skb)) {
}
EXPORT_SYMBOL(netdev_fix_features);
+/* Some devices need to (re-)set their netdev_ops inside
+ * ->init() or similar. If that happens, we have to set up
+ * the compat pointers again.
+ */
+void netdev_resync_ops(struct net_device *dev)
+{
+#ifdef CONFIG_COMPAT_NET_DEV_OPS
+ const struct net_device_ops *ops = dev->netdev_ops;
+
+ dev->init = ops->ndo_init;
+ dev->uninit = ops->ndo_uninit;
+ dev->open = ops->ndo_open;
+ dev->change_rx_flags = ops->ndo_change_rx_flags;
+ dev->set_rx_mode = ops->ndo_set_rx_mode;
+ dev->set_multicast_list = ops->ndo_set_multicast_list;
+ dev->set_mac_address = ops->ndo_set_mac_address;
+ dev->validate_addr = ops->ndo_validate_addr;
+ dev->do_ioctl = ops->ndo_do_ioctl;
+ dev->set_config = ops->ndo_set_config;
+ dev->change_mtu = ops->ndo_change_mtu;
+ dev->neigh_setup = ops->ndo_neigh_setup;
+ dev->tx_timeout = ops->ndo_tx_timeout;
+ dev->get_stats = ops->ndo_get_stats;
+ dev->vlan_rx_register = ops->ndo_vlan_rx_register;
+ dev->vlan_rx_add_vid = ops->ndo_vlan_rx_add_vid;
+ dev->vlan_rx_kill_vid = ops->ndo_vlan_rx_kill_vid;
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ dev->poll_controller = ops->ndo_poll_controller;
+#endif
+#endif
+}
+EXPORT_SYMBOL(netdev_resync_ops);
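A sketch of the intended call pattern (driver and ops names hypothetical):

	static int foo_init(struct net_device *dev)
	{
		/* device decided at ->init() time to use different ops */
		dev->netdev_ops = &foo_alternate_ops;
		/* refresh the compat function pointers to match */
		netdev_resync_ops(dev);
		return 0;
	}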
+
/**
* register_netdevice - register a network device
* @dev: device to register
* This is temporary until all network devices are converted.
*/
if (dev->netdev_ops) {
- const struct net_device_ops *ops = dev->netdev_ops;
-
- dev->init = ops->ndo_init;
- dev->uninit = ops->ndo_uninit;
- dev->open = ops->ndo_open;
- dev->change_rx_flags = ops->ndo_change_rx_flags;
- dev->set_rx_mode = ops->ndo_set_rx_mode;
- dev->set_multicast_list = ops->ndo_set_multicast_list;
- dev->set_mac_address = ops->ndo_set_mac_address;
- dev->validate_addr = ops->ndo_validate_addr;
- dev->do_ioctl = ops->ndo_do_ioctl;
- dev->set_config = ops->ndo_set_config;
- dev->change_mtu = ops->ndo_change_mtu;
- dev->tx_timeout = ops->ndo_tx_timeout;
- dev->get_stats = ops->ndo_get_stats;
- dev->vlan_rx_register = ops->ndo_vlan_rx_register;
- dev->vlan_rx_add_vid = ops->ndo_vlan_rx_add_vid;
- dev->vlan_rx_kill_vid = ops->ndo_vlan_rx_kill_vid;
-#ifdef CONFIG_NET_POLL_CONTROLLER
- dev->poll_controller = ops->ndo_poll_controller;
-#endif
+ netdev_resync_ops(dev);
} else {
char drivername[64];
pr_info("%s (%s): not using net_device_ops yet\n",
if (endp == buf)
goto err;
- rtnl_lock();
+ if (!rtnl_trylock())
+ return -ERESTARTSYS;
+
if (dev_isalive(net)) {
if ((ret = (*set)(net, new)) == 0)
ret = len;
struct pernet_operations *ops;
struct net *net;
- /* Be very certain incoming network packets will not find us */
- rcu_barrier();
-
net = container_of(work, struct net, work);
mutex_lock(&net_mutex);
int snmp_mib_init(void *ptr[2], size_t mibsize)
{
BUG_ON(ptr == NULL);
- ptr[0] = __alloc_percpu(mibsize);
+ ptr[0] = __alloc_percpu(mibsize, __alignof__(unsigned long long));
if (!ptr[0])
goto err0;
- ptr[1] = __alloc_percpu(mibsize);
+ ptr[1] = __alloc_percpu(mibsize, __alignof__(unsigned long long));
if (!ptr[1])
goto err1;
return 0;
int __init icmp_init(void)
{
- return register_pernet_device(&icmp_sk_ops);
+ return register_pernet_subsys(&icmp_sk_ops);
}
EXPORT_SYMBOL(icmp_err_convert);
int rc = 0;
#ifdef CONFIG_NET_CLS_ROUTE
- ip_rt_acct = __alloc_percpu(256 * sizeof(struct ip_rt_acct));
+ ip_rt_acct = __alloc_percpu(256 * sizeof(struct ip_rt_acct), __alignof__(struct ip_rt_acct));
if (!ip_rt_acct)
panic("IP: failed to allocate ip_rt_acct\n");
#endif
static int tcp_shifted_skb(struct sock *sk, struct sk_buff *skb,
struct tcp_sacktag_state *state,
- unsigned int pcount, int shifted, int mss)
+ unsigned int pcount, int shifted, int mss,
+ int dup_sack)
{
struct tcp_sock *tp = tcp_sk(sk);
struct sk_buff *prev = tcp_write_queue_prev(sk, skb);
}
/* We discard results */
- tcp_sacktag_one(skb, sk, state, 0, pcount);
+ tcp_sacktag_one(skb, sk, state, dup_sack, pcount);
/* Difference in this won't matter, both ACKed by the same cumul. ACK */
TCP_SKB_CB(prev)->sacked |= (TCP_SKB_CB(skb)->sacked & TCPCB_EVER_RETRANS);
if (!skb_shift(prev, skb, len))
goto fallback;
- if (!tcp_shifted_skb(sk, skb, state, pcount, len, mss))
+ if (!tcp_shifted_skb(sk, skb, state, pcount, len, mss, dup_sack))
goto out;
/* Hole filled allows collapsing with the next as well, this is very
len = skb->len;
if (skb_shift(prev, skb, len)) {
pcount += tcp_skb_pcount(skb);
- tcp_shifted_skb(sk, skb, state, tcp_skb_pcount(skb), len, mss);
+ tcp_shifted_skb(sk, skb, state, tcp_skb_pcount(skb), len, mss, 0);
}
out:
void __init tcp_v4_init(void)
{
inet_hashinfo_init(&tcp_hashinfo);
- if (register_pernet_device(&tcp_sk_ops))
+ if (register_pernet_subsys(&tcp_sk_ops))
panic("Failed to create the TCP control socket.\n");
}
/* Tom Kelly's Scalable TCP
*
- * See htt://www-lce.eng.cam.ac.uk/~ctk21/scalable/
+ * See http://www.deneholme.net/tom/scalable/
*
* John Heffner <jheffner@sc.edu>
*/
read_unlock(&dev_base_lock);
}
-static void addrconf_fixup_forwarding(struct ctl_table *table, int *p, int old)
+static int addrconf_fixup_forwarding(struct ctl_table *table, int *p, int old)
{
struct net *net;
net = (struct net *)table->extra2;
if (p == &net->ipv6.devconf_dflt->forwarding)
- return;
+ return 0;
+
+ if (!rtnl_trylock())
+ return -ERESTARTSYS;
- rtnl_lock();
if (p == &net->ipv6.devconf_all->forwarding) {
__s32 newf = net->ipv6.devconf_all->forwarding;
net->ipv6.devconf_dflt->forwarding = newf;
if (*p)
rt6_purge_dflt_routers(net);
+ return 1;
}
#endif
ASSERT_RTNL();
- if ((dev->flags & IFF_LOOPBACK) && how == 1)
- how = 0;
-
rt6_ifdown(net, dev);
neigh_ifdown(&nd_tbl, dev);
ret = proc_dointvec(ctl, write, filp, buffer, lenp, ppos);
if (write)
- addrconf_fixup_forwarding(ctl, valp, val);
+ ret = addrconf_fixup_forwarding(ctl, valp, val);
return ret;
}
}
*valp = new;
- addrconf_fixup_forwarding(table, valp, val);
- return 1;
+ return addrconf_fixup_forwarding(table, valp, val);
}
static struct addrconf_sysctl_table
EXPORT_SYMBOL(unregister_inet6addr_notifier);
-static void addrconf_net_exit(struct net *net)
-{
- struct net_device *dev;
-
- rtnl_lock();
- /* clean dev list */
- for_each_netdev(net, dev) {
- if (__in6_dev_get(dev) == NULL)
- continue;
- addrconf_ifdown(dev, 1);
- }
- addrconf_ifdown(net->loopback_dev, 2);
- rtnl_unlock();
-}
-
-static struct pernet_operations addrconf_net_ops = {
- .exit = addrconf_net_exit,
-};
-
/*
* Init / cleanup code
*/
if (err)
goto errlo;
- err = register_pernet_device(&addrconf_net_ops);
- if (err)
- return err;
-
register_netdevice_notifier(&ipv6_dev_notf);
addrconf_verify(0);
void addrconf_cleanup(void)
{
struct inet6_ifaddr *ifa;
+ struct net_device *dev;
int i;
unregister_netdevice_notifier(&ipv6_dev_notf);
- unregister_pernet_device(&addrconf_net_ops);
-
unregister_pernet_subsys(&addrconf_ops);
rtnl_lock();
+ /* clean dev list */
+ for_each_netdev(&init_net, dev) {
+ if (__in6_dev_get(dev) == NULL)
+ continue;
+ addrconf_ifdown(dev, 1);
+ }
+ addrconf_ifdown(init_net.loopback_dev, 2);
+
/*
* Check hash table.
*/
del_timer(&addr_chk_timer);
rtnl_unlock();
-
- unregister_pernet_subsys(&addrconf_net_ops);
}
static struct list_head inetsw6[SOCK_MAX];
static DEFINE_SPINLOCK(inetsw6_lock);
+static int disable_ipv6 = 0;
+module_param_named(disable, disable_ipv6, int, 0);
+MODULE_PARM_DESC(disable, "Disable IPv6 such that it is non-functional");
+
static __inline__ struct ipv6_pinfo *inet6_sk_generic(struct sock *sk)
{
const int offset = sk->sk_prot->obj_size - sizeof(struct ipv6_pinfo);
{
struct sk_buff *dummy_skb;
struct list_head *r;
- int err;
+ int err = 0;
BUILD_BUG_ON(sizeof(struct inet6_skb_parm) > sizeof(dummy_skb->cb));
+ /* Register the socket-side information for inet6_create. */
+ for(r = &inetsw6[0]; r < &inetsw6[SOCK_MAX]; ++r)
+ INIT_LIST_HEAD(r);
+
+ if (disable_ipv6) {
+ printk(KERN_INFO
+ "IPv6: Loaded, but administratively disabled, "
+ "reboot required to enable\n");
+ goto out;
+ }
+
err = proto_register(&tcpv6_prot, 1);
if (err)
goto out;
goto out_unregister_udplite_proto;
- /* Register the socket-side information for inet6_create. */
- for(r = &inetsw6[0]; r < &inetsw6[SOCK_MAX]; ++r)
- INIT_LIST_HEAD(r);
-
/* We MUST register RAW sockets before we create the ICMP6,
* IGMP6, or NDISC control sockets.
*/
if (twp != NULL) {
*twp = tw;
- NET_INC_STATS_BH(twsk_net(tw), LINUX_MIB_TIMEWAITRECYCLED);
+ NET_INC_STATS_BH(net, LINUX_MIB_TIMEWAITRECYCLED);
} else if (tw != NULL) {
/* Silly. Should hash-dance instead... */
inet_twsk_deschedule(tw, death_row);
- NET_INC_STATS_BH(twsk_net(tw), LINUX_MIB_TIMEWAITRECYCLED);
+ NET_INC_STATS_BH(net, LINUX_MIB_TIMEWAITRECYCLED);
inet_twsk_put(tw);
}
if (net->ct.sysctl_checksum && hooknum == NF_INET_PRE_ROUTING &&
nf_ip6_checksum(skb, hooknum, dataoff, IPPROTO_ICMPV6)) {
- nf_log_packet(PF_INET6, 0, skb, NULL, NULL, NULL,
- "nf_ct_icmpv6: ICMPv6 checksum failed\n");
+ if (LOG_INVALID(net, IPPROTO_ICMPV6))
+ nf_log_packet(PF_INET6, 0, skb, NULL, NULL, NULL,
+ "nf_ct_icmpv6: ICMPv6 checksum failed ");
return -NF_ACCEPT;
}
#endif
#define NFULNL_NLBUFSIZ_DEFAULT NLMSG_GOODSIZE
-#define NFULNL_TIMEOUT_DEFAULT HZ /* every second */
+#define NFULNL_TIMEOUT_DEFAULT 100 /* every second */
#define NFULNL_QTHRESH_DEFAULT 100 /* 100 packets */
#define NFULNL_COPY_RANGE_MAX 0xFFFF /* max packet size is limited by 16-bit struct nfattr nfa_len field */
qthreshold = inst->qthreshold;
/* per-rule qthreshold overrides per-instance */
- if (qthreshold > li->u.ulog.qthreshold)
- qthreshold = li->u.ulog.qthreshold;
+ if (li->u.ulog.qthreshold)
+ if (qthreshold > li->u.ulog.qthreshold)
+ qthreshold = li->u.ulog.qthreshold;
+
switch (inst->copy_mode) {
case NFULNL_COPY_META:
.release = seq_release_net,
};
-static void *xt_match_seq_start(struct seq_file *seq, loff_t *pos)
+/*
+ * Traversal state for the ip{,6}_tables match/target lists; it helps
+ * seq_file iteration cross the per-AF mutexes.
+ */
+struct nf_mttg_trav {
+ struct list_head *head, *curr;
+ uint8_t class, nfproto;
+};
+
+enum {
+ MTTG_TRAV_INIT,
+ MTTG_TRAV_NFP_UNSPEC,
+ MTTG_TRAV_NFP_SPEC,
+ MTTG_TRAV_DONE,
+};
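+
+/*
+ * A complete walk thus visits INIT -> NFP_UNSPEC -> NFP_SPEC -> DONE,
+ * holding xt[NFPROTO_UNSPEC].mutex over the first list and
+ * xt[nfproto].mutex over the second; _stop drops whichever is held.
+ */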
+
+static void *xt_mttg_seq_next(struct seq_file *seq, void *v, loff_t *ppos,
+ bool is_target)
{
- struct proc_dir_entry *pde = (struct proc_dir_entry *)seq->private;
- u_int16_t af = (unsigned long)pde->data;
+ static const uint8_t next_class[] = {
+ [MTTG_TRAV_NFP_UNSPEC] = MTTG_TRAV_NFP_SPEC,
+ [MTTG_TRAV_NFP_SPEC] = MTTG_TRAV_DONE,
+ };
+ struct nf_mttg_trav *trav = seq->private;
+
+ switch (trav->class) {
+ case MTTG_TRAV_INIT:
+ trav->class = MTTG_TRAV_NFP_UNSPEC;
+ mutex_lock(&xt[NFPROTO_UNSPEC].mutex);
+ trav->head = trav->curr = is_target ?
+ &xt[NFPROTO_UNSPEC].target : &xt[NFPROTO_UNSPEC].match;
+ break;
+ case MTTG_TRAV_NFP_UNSPEC:
+ trav->curr = trav->curr->next;
+ if (trav->curr != trav->head)
+ break;
+ mutex_unlock(&xt[NFPROTO_UNSPEC].mutex);
+ mutex_lock(&xt[trav->nfproto].mutex);
+ trav->head = trav->curr = is_target ?
+ &xt[trav->nfproto].target : &xt[trav->nfproto].match;
+ trav->class = next_class[trav->class];
+ break;
+ case MTTG_TRAV_NFP_SPEC:
+ trav->curr = trav->curr->next;
+ if (trav->curr != trav->head)
+ break;
+ /* fallthru, _stop will unlock */
+ default:
+ return NULL;
+ }
- mutex_lock(&xt[af].mutex);
- return seq_list_start(&xt[af].match, *pos);
+ if (ppos != NULL)
+ ++*ppos;
+ return trav;
}
-static void *xt_match_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+static void *xt_mttg_seq_start(struct seq_file *seq, loff_t *pos,
+ bool is_target)
{
- struct proc_dir_entry *pde = (struct proc_dir_entry *)seq->private;
- u_int16_t af = (unsigned long)pde->data;
+ struct nf_mttg_trav *trav = seq->private;
+ unsigned int j;
- return seq_list_next(v, &xt[af].match, pos);
+ trav->class = MTTG_TRAV_INIT;
+ for (j = 0; j < *pos; ++j)
+ if (xt_mttg_seq_next(seq, NULL, NULL, is_target) == NULL)
+ return NULL;
+ return trav;
}
-static void xt_match_seq_stop(struct seq_file *seq, void *v)
+static void xt_mttg_seq_stop(struct seq_file *seq, void *v)
{
- struct proc_dir_entry *pde = seq->private;
- u_int16_t af = (unsigned long)pde->data;
+ struct nf_mttg_trav *trav = seq->private;
+
+ switch (trav->class) {
+ case MTTG_TRAV_NFP_UNSPEC:
+ mutex_unlock(&xt[NFPROTO_UNSPEC].mutex);
+ break;
+ case MTTG_TRAV_NFP_SPEC:
+ mutex_unlock(&xt[trav->nfproto].mutex);
+ break;
+ }
+}
- mutex_unlock(&xt[af].mutex);
+static void *xt_match_seq_start(struct seq_file *seq, loff_t *pos)
+{
+ return xt_mttg_seq_start(seq, pos, false);
}
-static int xt_match_seq_show(struct seq_file *seq, void *v)
+static void *xt_match_seq_next(struct seq_file *seq, void *v, loff_t *ppos)
{
- struct xt_match *match = list_entry(v, struct xt_match, list);
+ return xt_mttg_seq_next(seq, v, ppos, false);
+}
- if (strlen(match->name))
- return seq_printf(seq, "%s\n", match->name);
- else
- return 0;
+static int xt_match_seq_show(struct seq_file *seq, void *v)
+{
+ const struct nf_mttg_trav *trav = seq->private;
+ const struct xt_match *match;
+
+ switch (trav->class) {
+ case MTTG_TRAV_NFP_UNSPEC:
+ case MTTG_TRAV_NFP_SPEC:
+ if (trav->curr == trav->head)
+ return 0;
+ match = list_entry(trav->curr, struct xt_match, list);
+ return (*match->name == '\0') ? 0 :
+ seq_printf(seq, "%s\n", match->name);
+ }
+ return 0;
}
static const struct seq_operations xt_match_seq_ops = {
.start = xt_match_seq_start,
.next = xt_match_seq_next,
- .stop = xt_match_seq_stop,
+ .stop = xt_mttg_seq_stop,
.show = xt_match_seq_show,
};
static int xt_match_open(struct inode *inode, struct file *file)
{
+ struct seq_file *seq;
+ struct nf_mttg_trav *trav;
int ret;
- ret = seq_open(file, &xt_match_seq_ops);
- if (!ret) {
- struct seq_file *seq = file->private_data;
+ trav = kmalloc(sizeof(*trav), GFP_KERNEL);
+ if (trav == NULL)
+ return -ENOMEM;
- seq->private = PDE(inode);
+ ret = seq_open(file, &xt_match_seq_ops);
+ if (ret < 0) {
+ kfree(trav);
+ return ret;
}
- return ret;
+
+ seq = file->private_data;
+ seq->private = trav;
+ trav->nfproto = (unsigned long)PDE(inode)->data;
+ return 0;
}
static const struct file_operations xt_match_ops = {
.open = xt_match_open,
.read = seq_read,
.llseek = seq_lseek,
- .release = seq_release,
+ .release = seq_release_private,
};
static void *xt_target_seq_start(struct seq_file *seq, loff_t *pos)
{
- struct proc_dir_entry *pde = (struct proc_dir_entry *)seq->private;
- u_int16_t af = (unsigned long)pde->data;
-
- mutex_lock(&xt[af].mutex);
- return seq_list_start(&xt[af].target, *pos);
+ return xt_mttg_seq_start(seq, pos, true);
}
-static void *xt_target_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+static void *xt_target_seq_next(struct seq_file *seq, void *v, loff_t *ppos)
{
- struct proc_dir_entry *pde = (struct proc_dir_entry *)seq->private;
- u_int16_t af = (unsigned long)pde->data;
-
- return seq_list_next(v, &xt[af].target, pos);
-}
-
-static void xt_target_seq_stop(struct seq_file *seq, void *v)
-{
- struct proc_dir_entry *pde = seq->private;
- u_int16_t af = (unsigned long)pde->data;
-
- mutex_unlock(&xt[af].mutex);
+ return xt_mttg_seq_next(seq, v, ppos, true);
}
static int xt_target_seq_show(struct seq_file *seq, void *v)
{
- struct xt_target *target = list_entry(v, struct xt_target, list);
-
- if (strlen(target->name))
- return seq_printf(seq, "%s\n", target->name);
- else
- return 0;
+ const struct nf_mttg_trav *trav = seq->private;
+ const struct xt_target *target;
+
+ switch (trav->class) {
+ case MTTG_TRAV_NFP_UNSPEC:
+ case MTTG_TRAV_NFP_SPEC:
+ if (trav->curr == trav->head)
+ return 0;
+ target = list_entry(trav->curr, struct xt_target, list);
+ return (*target->name == '\0') ? 0 :
+ seq_printf(seq, "%s\n", target->name);
+ }
+ return 0;
}
static const struct seq_operations xt_target_seq_ops = {
.start = xt_target_seq_start,
.next = xt_target_seq_next,
- .stop = xt_target_seq_stop,
+ .stop = xt_mttg_seq_stop,
.show = xt_target_seq_show,
};
static int xt_target_open(struct inode *inode, struct file *file)
{
+ struct seq_file *seq;
+ struct nf_mttg_trav *trav;
int ret;
- ret = seq_open(file, &xt_target_seq_ops);
- if (!ret) {
- struct seq_file *seq = file->private_data;
+ trav = kmalloc(sizeof(*trav), GFP_KERNEL);
+ if (trav == NULL)
+ return -ENOMEM;
- seq->private = PDE(inode);
+ ret = seq_open(file, &xt_target_seq_ops);
+ if (ret < 0) {
+ kfree(trav);
+ return ret;
}
- return ret;
+
+ seq = file->private_data;
+ seq->private = trav;
+ trav->nfproto = (unsigned long)PDE(inode)->data;
+ return 0;
}
static const struct file_operations xt_target_ops = {
.open = xt_target_open,
.read = seq_read,
.llseek = seq_lseek,
- .release = seq_release,
+ .release = seq_release_private,
};
#define FORMAT_TABLES "_tables_names"
struct recent_entry *e;
char buf[sizeof("+b335:1d35:1e55:dead:c0de:1715:5afe:c0de")];
const char *c = buf;
- union nf_inet_addr addr;
+ union nf_inet_addr addr = {};
u_int16_t family;
bool add, succ;
return 0;
}
+/**
+ * netlink_set_err - report error to broadcast listeners
+ * @ssk: the kernel netlink socket, as returned by netlink_kernel_create()
+ * @pid: the PID of a process that we want to skip (if any)
+ * @group: the broadcast group that will notice the error
+ * @code: error code, must be negative (as usual in kernelspace)
+ */
void netlink_set_err(struct sock *ssk, u32 pid, u32 group, int code)
{
struct netlink_set_err_data info;
info.exclude_sk = ssk;
info.pid = pid;
info.group = group;
- info.code = code;
+ /* sk->sk_err wants a positive error value */
+ info.code = -code;
read_lock(&nl_table_lock);
if (R_tab == NULL)
goto failure;
- if (!est && (ret == ACT_P_CREATED ||
- !gen_estimator_active(&police->tcf_bstats,
- &police->tcf_rate_est))) {
- err = -EINVAL;
- goto failure;
- }
-
if (parm->peakrate.rate) {
P_tab = qdisc_get_rtab(&parm->peakrate,
tb[TCA_POLICE_PEAKRATE]);
&police->tcf_lock, est);
if (err)
goto failure_unlock;
+ } else if (tb[TCA_POLICE_AVRATE] &&
+ (ret == ACT_P_CREATED ||
+ !gen_estimator_active(&police->tcf_bstats,
+ &police->tcf_rate_est))) {
+ err = -EINVAL;
+ goto failure_unlock;
}
/* No failure allowed after this point */
{
struct drr_sched *q = qdisc_priv(sch);
struct drr_class *cl = (struct drr_class *)*arg;
+ struct nlattr *opt = tca[TCA_OPTIONS];
struct nlattr *tb[TCA_DRR_MAX + 1];
u32 quantum;
int err;
- err = nla_parse_nested(tb, TCA_DRR_MAX, tca[TCA_OPTIONS], drr_policy);
+ if (!opt)
+ return -EINVAL;
+
+ err = nla_parse_nested(tb, TCA_DRR_MAX, opt, drr_policy);
if (err < 0)
return err;
static int sctp_ctl_sock_init(void)
{
int err;
- sa_family_t family;
+ sa_family_t family = PF_INET;
if (sctp_get_pf_specific(PF_INET6))
family = PF_INET6;
- else
- family = PF_INET;
err = inet_ctl_sock_create(&sctp_ctl_sock, family,
SOCK_SEQPACKET, IPPROTO_SCTP, &init_net);
+
+ /* If IPv6 socket could not be created, try the IPv4 socket */
+ if (err < 0 && family == PF_INET6)
+ err = inet_ctl_sock_create(&sctp_ctl_sock, AF_INET,
+ SOCK_SEQPACKET, IPPROTO_SCTP,
+ &init_net);
+
if (err < 0) {
printk(KERN_ERR
"SCTP: Failed to create the SCTP control socket.\n");
out:
return status;
err_v6_add_protocol:
- sctp_v6_del_protocol();
-err_add_protocol:
sctp_v4_del_protocol();
+err_add_protocol:
inet_ctl_sock_destroy(sctp_ctl_sock);
err_ctl_sock_init:
sctp_v6_protosw_exit();
sctp_v4_pf_exit();
sctp_v6_pf_exit();
sctp_sysctl_unregister();
- list_del(&sctp_af_inet.list);
free_pages((unsigned long)sctp_port_hashtable,
get_order(sctp_port_hashsize *
sizeof(struct sctp_bind_hashbucket)));
sctp_v4_pf_exit();
sctp_sysctl_unregister();
- list_del(&sctp_af_inet.list);
free_pages((unsigned long)sctp_assoc_hashtable,
get_order(sctp_assoc_hashsize *
struct sctp_association *asoc,
struct sctp_chunk *chunk)
{
- struct sctp_operr_chunk *operr_chunk;
struct sctp_errhdr *err_hdr;
+ struct sctp_ulpevent *ev;
- operr_chunk = (struct sctp_operr_chunk *)chunk->chunk_hdr;
- err_hdr = &operr_chunk->err_hdr;
+ while (chunk->chunk_end > chunk->skb->data) {
+ err_hdr = (struct sctp_errhdr *)(chunk->skb->data);
- switch (err_hdr->cause) {
- case SCTP_ERROR_UNKNOWN_CHUNK:
- {
- struct sctp_chunkhdr *unk_chunk_hdr;
+ ev = sctp_ulpevent_make_remote_error(asoc, chunk, 0,
+ GFP_ATOMIC);
+ if (!ev)
+ return;
- unk_chunk_hdr = (struct sctp_chunkhdr *)err_hdr->variable;
- switch (unk_chunk_hdr->type) {
- /* ADDIP 4.1 A9) If the peer responds to an ASCONF with an
- * ERROR chunk reporting that it did not recognized the ASCONF
- * chunk type, the sender of the ASCONF MUST NOT send any
- * further ASCONF chunks and MUST stop its T-4 timer.
- */
- case SCTP_CID_ASCONF:
- asoc->peer.asconf_capable = 0;
- sctp_add_cmd_sf(cmds, SCTP_CMD_TIMER_STOP,
+ sctp_ulpq_tail_event(&asoc->ulpq, ev);
+
+ switch (err_hdr->cause) {
+ case SCTP_ERROR_UNKNOWN_CHUNK:
+ {
+ sctp_chunkhdr_t *unk_chunk_hdr;
+
+ unk_chunk_hdr = (sctp_chunkhdr_t *)err_hdr->variable;
+ switch (unk_chunk_hdr->type) {
+ /* ADDIP 4.1 A9) If the peer responds to an ASCONF with
+	 * an ERROR chunk reporting that it did not recognize
+ * the ASCONF chunk type, the sender of the ASCONF MUST
+ * NOT send any further ASCONF chunks and MUST stop its
+ * T-4 timer.
+ */
+ case SCTP_CID_ASCONF:
+ if (asoc->peer.asconf_capable == 0)
+ break;
+
+ asoc->peer.asconf_capable = 0;
+ sctp_add_cmd_sf(cmds, SCTP_CMD_TIMER_STOP,
SCTP_TO(SCTP_EVENT_TIMEOUT_T4_RTO));
+ break;
+ default:
+ break;
+ }
break;
+ }
default:
break;
}
- break;
- }
- default:
- break;
}
}
sctp_cmd_seq_t *commands)
{
struct sctp_chunk *chunk = arg;
- struct sctp_ulpevent *ev;
if (!sctp_vtag_verify(chunk, asoc))
return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
return sctp_sf_violation_chunklen(ep, asoc, type, arg,
commands);
- while (chunk->chunk_end > chunk->skb->data) {
- ev = sctp_ulpevent_make_remote_error(asoc, chunk, 0,
- GFP_ATOMIC);
- if (!ev)
- goto nomem;
+ sctp_add_cmd_sf(commands, SCTP_CMD_PROCESS_OPERR,
+ SCTP_CHUNK(chunk));
- sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP,
- SCTP_ULPEVENT(ev));
- sctp_add_cmd_sf(commands, SCTP_CMD_PROCESS_OPERR,
- SCTP_CHUNK(chunk));
- }
return SCTP_DISPOSITION_CONSUME;
-
-nomem:
- return SCTP_DISPOSITION_NOMEM;
}
/*
freq_diff = freq_range->end_freq_khz - freq_range->start_freq_khz;
- if (freq_diff <= 0 || freq_range->max_bandwidth_khz > freq_diff)
+ if (freq_range->end_freq_khz <= freq_range->start_freq_khz ||
+ freq_range->max_bandwidth_khz > freq_diff)
return false;
return true;
cmd_gzip = gzip -f -9 < $< > $@
+# Bzip2
+# ---------------------------------------------------------------------------
+
+# Bzip2 (and lzma) do not store the uncompressed size in the file,
+# so we append it ourselves via scripts/bin_size
+size_append=$(CONFIG_SHELL) $(srctree)/scripts/bin_size
+
+quiet_cmd_bzip2 = BZIP2 $@
+cmd_bzip2 = (bzip2 -9 < $< && $(size_append) $<) > $@ || (rm -f $@ ; false)
+
+# Lzma
+# ---------------------------------------------------------------------------
+
+quiet_cmd_lzma = LZMA $@
+cmd_lzma = (lzma -9 -c $< && $(size_append) $<) >$@ || (rm -f $@ ; false)
--- /dev/null
+#!/bin/sh
+
+if [ $# = 0 ] ; then
+	echo "Usage: $0 file" >&2
+	exit 1
+fi
+
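+# What follows appends the size of $1 as a 32-bit little-endian binary
+# value: printf renders it as 8 hex digits and the sed expression swaps
+# the byte order, e.g. 1057 -> "00000421" -> "\x21\x04\x00\x00".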
+size_dec=`stat -c "%s" "$1"`
+size_hex_echo_string=`printf "%08x" $size_dec |
+ sed 's/\(..\)\(..\)\(..\)\(..\)/\\\\x\4\\\\x\3\\\\x\2\\\\x\1/g'`
+/bin/echo -ne $size_hex_echo_string
# Released under the terms of the GNU GPL
#
# Generate a cpio packed initramfs. It uses gen_init_cpio to generate
-# the cpio archive, and gzip to pack it.
+# the cpio archive, and then compresses it.
# The script may also be used to generate the inputfile used for gen_init_cpio
# This script assumes that gen_init_cpio is located in usr/ directory
cat << EOF
Usage:
$0 [-o <file>] [-u <uid>] [-g <gid>] {-d | <cpio_source>} ...
- -o <file> Create gzipped initramfs file named <file> using
- gen_init_cpio and gzip
+ -o <file> Create compressed initramfs file named <file> using
+		gen_init_cpio and a compressor chosen by the file extension
-u <uid> User ID to map to user ID 0 (root).
<uid> is only meaningful if <cpio_source> is a
directory. "squash" forces all files to uid 0.
output="/dev/stdout"
output_file=""
is_cpio_compressed=
+compr="gzip -9 -f"
arg="$1"
case "$arg" in
echo "deps_initramfs := \\"
shift
;;
- "-o") # generate gzipped cpio image named $1
+ "-o") # generate compressed cpio image named $1
shift
output_file="$1"
cpio_list="$(mktemp ${TMPDIR:-/tmp}/cpiolist.XXXXXX)"
output=${cpio_list}
+ echo "$output_file" | grep -q "\.gz$" && compr="gzip -9 -f"
+ echo "$output_file" | grep -q "\.bz2$" && compr="bzip2 -9 -f"
+ echo "$output_file" | grep -q "\.lzma$" && compr="lzma -9 -f"
+ echo "$output_file" | grep -q "\.cpio$" && compr="cat"
shift
;;
esac
esac
done
-# If output_file is set we will generate cpio archive and gzip it
+# If output_file is set we will generate a cpio archive and compress it
# we are careful to delete tmp files
if [ ! -z ${output_file} ]; then
if [ -z ${cpio_file} ]; then
if [ "${is_cpio_compressed}" = "compressed" ]; then
cat ${cpio_tfile} > ${output_file}
else
- cat ${cpio_tfile} | gzip -f -9 - > ${output_file}
+ (cat ${cpio_tfile} | ${compr} - > ${output_file}) \
+ || (rm -f ${output_file} ; false)
fi
[ -z ${cpio_file} ] && rm ${cpio_tfile}
fi
if (!S_ISSOCK(inode->i_mode) ||
((mask & (MAY_WRITE | MAY_APPEND)) == 0))
return 0;
-
sock = SOCKET_I(inode);
sk = sock->sk;
+ if (sk == NULL)
+ return 0;
sksec = sk->sk_security;
- if (sksec->nlbl_state != NLBL_REQUIRE)
+ if (sksec == NULL || sksec->nlbl_state != NLBL_REQUIRE)
return 0;
local_bh_disable();
* looks for host based access restrictions
*
* This version will only be appropriate for really small
- * sets of single label hosts. Because of the masking
- * it cannot shortcut out on the first match. There are
- * numerious ways to address the problem, but none of them
- * have been applied here.
+ * sets of single label hosts.
*
* Returns the label of the far end or NULL if it's not special.
*/
static char *smack_host_label(struct sockaddr_in *sip)
{
struct smk_netlbladdr *snp;
- char *bestlabel = NULL;
struct in_addr *siap = &sip->sin_addr;
- struct in_addr *liap;
- struct in_addr *miap;
- struct in_addr bestmask;
if (siap->s_addr == 0)
return NULL;
- bestmask.s_addr = 0;
-
for (snp = smack_netlbladdrs; snp != NULL; snp = snp->smk_next) {
- liap = &snp->smk_host.sin_addr;
- miap = &snp->smk_mask;
- /*
- * If the addresses match after applying the list entry mask
- * the entry matches the address. If it doesn't move along to
- * the next entry.
- */
- if ((liap->s_addr & miap->s_addr) !=
- (siap->s_addr & miap->s_addr))
- continue;
/*
- * If the list entry mask identifies a single address
- * it can't get any more specific.
+		 * we return after finding the first match because
+		 * the list is sorted from longest to shortest mask
+		 * so the first match is the most specific one
*/
- if (miap->s_addr == 0xffffffff)
+		if (snp->smk_host.sin_addr.s_addr ==
+		    (siap->s_addr & snp->smk_mask.s_addr)) {
return snp->smk_label;
- /*
- * If the list entry mask is less specific than the best
- * already found this entry is uninteresting.
- */
- if ((miap->s_addr | bestmask.s_addr) == bestmask.s_addr)
- continue;
- /*
- * This is better than any entry found so far.
- */
- bestmask.s_addr = miap->s_addr;
- bestlabel = snp->smk_label;
+ }
}
- return bestlabel;
+ return NULL;
}
/**
return skp;
}
-/*
-#define BEMASK 0x80000000
-*/
-#define BEMASK 0x00000001
#define BEBITS (sizeof(__be32) * 8)
/*
{
struct smk_netlbladdr *skp = (struct smk_netlbladdr *) v;
	unsigned char *hp = (unsigned char *) &skp->smk_host.sin_addr.s_addr;
- __be32 bebits;
- int maskn = 0;
+ int maskn;
+ u32 temp_mask = be32_to_cpu(skp->smk_mask.s_addr);
- for (bebits = BEMASK; bebits != 0; maskn++, bebits <<= 1)
- if ((skp->smk_mask.s_addr & bebits) == 0)
- break;
+	/* count the leading one-bits of the contiguous netmask */
+	for (maskn = 0; temp_mask; temp_mask <<= 1, maskn++)
+		;
seq_printf(s, "%u.%u.%u.%u/%d %s\n",
hp[0], hp[1], hp[2], hp[3], maskn, skp->smk_label);
return seq_open(file, &netlbladdr_seq_ops);
}
+/**
+ * smk_netlbladdr_insert
+ * @new : netlabel to insert
+ *
+ * This helper inserts a netlabel entry into the smack_netlbladdrs
+ * list, kept sorted by netmask length (longest to shortest)
+ */
+static void smk_netlbladdr_insert(struct smk_netlbladdr *new)
+{
+ struct smk_netlbladdr *m;
+
+ if (smack_netlbladdrs == NULL) {
+ smack_netlbladdrs = new;
+ return;
+ }
+
+	/* comparing the raw __be32 values works here because contiguous
+	 * netmasks stay monotonic in either byte order: a longer prefix
+	 * always yields a larger raw value */
+ if (new->smk_mask.s_addr > smack_netlbladdrs->smk_mask.s_addr) {
+ new->smk_next = smack_netlbladdrs;
+ smack_netlbladdrs = new;
+ return;
+ }
+ for (m = smack_netlbladdrs; m != NULL; m = m->smk_next) {
+ if (m->smk_next == NULL) {
+ m->smk_next = new;
+ return;
+ }
+ if (new->smk_mask.s_addr > m->smk_next->smk_mask.s_addr) {
+ new->smk_next = m->smk_next;
+ m->smk_next = new;
+ return;
+ }
+ }
+}
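For illustration, a hypothetical resulting order after inserting
10.0.0.0/8, 192.168.1.0/24 and 192.168.1.42/32:

	/*
	 * smack_netlbladdrs: 192.168.1.42/32 -> 192.168.1.0/24 -> 10.0.0.0/8
	 * so smack_host_label() can return on the first matching entry.
	 */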
+
+
/**
* smk_write_netlbladdr - write() for /smack/netlabel
* @filp: file pointer, not actually used
struct netlbl_audit audit_info;
struct in_addr mask;
unsigned int m;
- __be32 bebits = BEMASK;
+	u32 mask_bits = (1U << 31);
__be32 nsa;
+ u32 temp_mask;
/*
* Must have privilege.
if (sp == NULL)
return -EINVAL;
- for (mask.s_addr = 0; m > 0; m--) {
- mask.s_addr |= bebits;
- bebits <<= 1;
+ for (temp_mask = 0; m > 0; m--) {
+ temp_mask |= mask_bits;
+ mask_bits >>= 1;
}
+ mask.s_addr = cpu_to_be32(temp_mask);
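+	/* e.g. m == 24 gives temp_mask == 0xffffff00, i.e. 255.255.255.0 */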
+
+ newname.sin_addr.s_addr &= mask.s_addr;
/*
* Only allow one writer at a time. Writes should be
* quite rare and small in any case.
mutex_lock(&smk_netlbladdr_lock);
nsa = newname.sin_addr.s_addr;
+ /* try to find if the prefix is already in the list */
for (skp = smack_netlbladdrs; skp != NULL; skp = skp->smk_next)
if (skp->smk_host.sin_addr.s_addr == nsa &&
skp->smk_mask.s_addr == mask.s_addr)
rc = 0;
skp->smk_host.sin_addr.s_addr = newname.sin_addr.s_addr;
skp->smk_mask.s_addr = mask.s_addr;
- skp->smk_next = smack_netlbladdrs;
skp->smk_label = sp;
- smack_netlbladdrs = skp;
+ smk_netlbladdr_insert(skp);
}
} else {
rc = netlbl_cfg_unlbl_static_del(&init_net, NULL,
SND_PCI_QUIRK(0x1028, 0x20ac, "Dell Studio Desktop", 0x01),
/* including bogus ALC268 in slot#2 that conflicts with ALC888 */
SND_PCI_QUIRK(0x17c0, 0x4085, "Medion MD96630", 0x01),
+ /* conflict of ALC268 in slot#3 (digital I/O); a temporary fix */
+ SND_PCI_QUIRK(0x1179, 0xff00, "Toshiba laptop", 0x03),
{}
};
SND_PCI_QUIRK(0x103c, 0x1309, "HP xw4*00", ALC262_HP_BPC),
SND_PCI_QUIRK(0x103c, 0x130a, "HP xw6*00", ALC262_HP_BPC),
SND_PCI_QUIRK(0x103c, 0x130b, "HP xw8*00", ALC262_HP_BPC),
+ SND_PCI_QUIRK(0x103c, 0x170b, "HP xw*", ALC262_HP_BPC),
SND_PCI_QUIRK(0x103c, 0x2800, "HP D7000", ALC262_HP_BPC_D7000_WL),
SND_PCI_QUIRK(0x103c, 0x2801, "HP D7000", ALC262_HP_BPC_D7000_WF),
SND_PCI_QUIRK(0x103c, 0x2802, "HP D7000", ALC262_HP_BPC_D7000_WL),
"LFE Playback Volume",
"Side Playback Volume",
"Headphone Playback Volume",
- "Headphone Playback Volume",
+ "Headphone2 Playback Volume",
"Speaker Playback Volume",
"External Speaker Playback Volume",
"Speaker2 Playback Volume",
"LFE Playback Switch",
"Side Playback Switch",
"Headphone Playback Switch",
- "Headphone Playback Switch",
+ "Headphone2 Playback Switch",
"Speaker Playback Switch",
"External Speaker Playback Switch",
"Speaker2 Playback Switch",
if (! spec->autocfg.line_outs)
return 0; /* can't find valid pin config */
+#if 0 /* FIXME: temporarily disabled */
/* If we have no real line-out pin and multiple hp-outs, HPs should
* be set up as multi-channel outputs.
*/
spec->autocfg.line_out_type = AUTO_PIN_HP_OUT;
spec->autocfg.hp_outs = 0;
}
+#endif /* FIXME: temporarily disabled */
if (spec->autocfg.mono_out_pin) {
int dir = get_wcaps(codec, spec->autocfg.mono_out_pin) &
(AC_WCAP_OUT_AMP | AC_WCAP_IN_AMP);
case STAC_DELL_M4_3:
spec->num_dmics = 1;
spec->num_smuxes = 0;
- spec->num_dmuxes = 0;
+ spec->num_dmuxes = 1;
break;
default:
spec->num_dmics = STAC92HD71BXX_NUM_DMICS;
owned by group root in the initial ramdisk image.
If you are not sure, leave it set to "0".
+
+config RD_GZIP
+ bool "Initial ramdisk compressed using gzip"
+ default y
+ depends on BLK_DEV_INITRD=y
+ select DECOMPRESS_GZIP
+ help
+ Support loading of a gzip encoded initial ramdisk or cpio buffer.
+ If unsure, say Y.
+
+config RD_BZIP2
+ bool "Initial ramdisk compressed using bzip2"
+ default n
+ depends on BLK_DEV_INITRD=y
+ select DECOMPRESS_BZIP2
+ help
+	  Support loading of a bzip2 encoded initial ramdisk or cpio buffer.
+ If unsure, say N.
+
+config RD_LZMA
+ bool "Initial ramdisk compressed using lzma"
+ default n
+ depends on BLK_DEV_INITRD=y
+ select DECOMPRESS_LZMA
+ help
+	  Support loading of an lzma encoded initial ramdisk or cpio buffer.
+ If unsure, say N.
+
+choice
+ prompt "Built-in initramfs compression mode"
+ help
+ This setting is only meaningful if the INITRAMFS_SOURCE is
+	  set. It selects the algorithm with which the INITRAMFS_SOURCE
+	  will be compressed.
+ Several compression algorithms are available, which differ
+ in efficiency, compression and decompression speed.
+ Compression speed is only relevant when building a kernel.
+ Decompression speed is relevant at each boot.
+
+ If you have any problems with bzip2 or lzma compressed
+ initramfs, mail me (Alain Knaff) <alain@knaff.lu>.
+
+ High compression options are mostly useful for users who
+ are low on disk space (embedded systems), but for whom ram
+ size matters less.
+
+	  If in doubt, select 'gzip'.
+
+config INITRAMFS_COMPRESSION_NONE
+ bool "None"
+ help
+	  Do not compress the built-in initramfs at all. This may
+	  sound wasteful of space, but you should be aware that the
+	  built-in initramfs will be compressed at a later stage
+	  anyway, along with the rest of the kernel, on those
+	  architectures that support this.
+	  However, not compressing the initramfs may lead to slightly
+	  higher memory consumption for a short time at boot, while
+	  both the cpio image and the unpacked filesystem image are
+	  present in memory simultaneously.
+
+config INITRAMFS_COMPRESSION_GZIP
+ bool "Gzip"
+ depends on RD_GZIP
+ help
+	  The old and tried gzip compression. Its compression ratio is
+	  the poorest among the three choices; however, its speed (both
+	  compression and decompression) is the fastest.
+
+config INITRAMFS_COMPRESSION_BZIP2
+ bool "Bzip2"
+ depends on RD_BZIP2
+ help
+	  Its compression ratio and speed are intermediate.
+	  Decompression speed is the slowest among the three. The
+	  initramfs size is about 10% smaller with bzip2 than with
+	  gzip. Bzip2 uses a large amount of memory. For modern kernels
+	  you will need at least 8MB of RAM to boot.
+
+config INITRAMFS_COMPRESSION_LZMA
+ bool "LZMA"
+ depends on RD_LZMA
+ help
+	  The most recent compression algorithm.
+	  Its ratio is the best and its decompression speed is between
+	  the other two. Compression is the slowest. The initramfs size
+	  is about 33% smaller with LZMA than with gzip.
+
+endchoice
PHONY += klibcdirs
+# No compression
+suffix_$(CONFIG_INITRAMFS_COMPRESSION_NONE) =
+
+# Gzip
+suffix_$(CONFIG_INITRAMFS_COMPRESSION_GZIP) = .gz
+
+# Bzip2
+suffix_$(CONFIG_INITRAMFS_COMPRESSION_BZIP2) = .bz2
+
+# Lzma
+suffix_$(CONFIG_INITRAMFS_COMPRESSION_LZMA) = .lzma
+
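+# Exactly one CONFIG_INITRAMFS_COMPRESSION_* above is 'y', so only that
+# assignment defines suffix_y (e.g. suffix_y = .gz for gzip).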
# Generate builtin.o based on initramfs_data.o
-obj-$(CONFIG_BLK_DEV_INITRD) := initramfs_data.o
+obj-$(CONFIG_BLK_DEV_INITRD) := initramfs_data$(suffix_y).o
-# initramfs_data.o contains the initramfs_data.cpio.gz image.
+# initramfs_data.o contains the compressed initramfs_data.cpio image.
# The image is included using .incbin, a dependency which is not
# tracked automatically.
-$(obj)/initramfs_data.o: $(obj)/initramfs_data.cpio.gz FORCE
+$(obj)/initramfs_data$(suffix_y).o: $(obj)/initramfs_data.cpio$(suffix_y) FORCE
#####
# Generate the initramfs cpio archive
$(if $(CONFIG_INITRAMFS_ROOT_UID), -u $(CONFIG_INITRAMFS_ROOT_UID)) \
$(if $(CONFIG_INITRAMFS_ROOT_GID), -g $(CONFIG_INITRAMFS_ROOT_GID))
-# .initramfs_data.cpio.gz.d is used to identify all files included
+# .initramfs_data.cpio.d is used to identify all files included
# in initramfs and to detect if any files are added/removed.
# Removed files are identified by directory timestamp being updated
# The dependency list is generated by gen_initramfs.sh -l
-ifneq ($(wildcard $(obj)/.initramfs_data.cpio.gz.d),)
- include $(obj)/.initramfs_data.cpio.gz.d
+ifneq ($(wildcard $(obj)/.initramfs_data.cpio.d),)
+ include $(obj)/.initramfs_data.cpio.d
endif
quiet_cmd_initfs = GEN $@
cmd_initfs = $(initramfs) -o $@ $(ramfs-args) $(ramfs-input)
-targets := initramfs_data.cpio.gz
+targets := initramfs_data.cpio.gz initramfs_data.cpio.bz2 initramfs_data.cpio.lzma initramfs_data.cpio
# do not try to update files included in initramfs
$(deps_initramfs): ;
$(deps_initramfs): klibcdirs
-# We rebuild initramfs_data.cpio.gz if:
-# 1) Any included file is newer then initramfs_data.cpio.gz
+# We rebuild initramfs_data.cpio if:
+# 1) Any included file is newer than initramfs_data.cpio
# 2) There are changes in which files are included (added or deleted)
-# 3) If gen_init_cpio are newer than initramfs_data.cpio.gz
+# 3) gen_init_cpio is newer than initramfs_data.cpio
# 4) arguments to gen_initramfs.sh changes
-$(obj)/initramfs_data.cpio.gz: $(obj)/gen_init_cpio $(deps_initramfs) klibcdirs
- $(Q)$(initramfs) -l $(ramfs-input) > $(obj)/.initramfs_data.cpio.gz.d
+$(obj)/initramfs_data.cpio$(suffix_y): $(obj)/gen_init_cpio $(deps_initramfs) klibcdirs
+ $(Q)$(initramfs) -l $(ramfs-input) > $(obj)/.initramfs_data.cpio.d
$(call if_changed,initfs)
*/
.section .init.ramfs,"a"
-.incbin "usr/initramfs_data.cpio.gz"
+.incbin "usr/initramfs_data.cpio"
--- /dev/null
+/*
+ initramfs_data includes the compressed binary that is the
+ filesystem used for early user space.
+ Note: Older versions of "as" (prior to binutils 2.11.90.0.23
+  released on 2001-07-14) did not support .incbin.
+ If you are forced to use older binutils than that then the
+ following trick can be applied to create the resulting binary:
+
+
+ ld -m elf_i386 --format binary --oformat elf32-i386 -r \
+ -T initramfs_data.scr initramfs_data.cpio.gz -o initramfs_data.o
+ ld -m elf_i386 -r -o built-in.o initramfs_data.o
+
+ initramfs_data.scr looks like this:
+SECTIONS
+{
+ .init.ramfs : { *(.data) }
+}
+
+  The above example is for i386 - the parameters vary between architectures.
+  If needed, look up LDFLAGS_BLOB in an older version of the
+  arch/$(ARCH)/Makefile to see the flags used before .incbin was introduced.
+
+ Using .incbin has the advantage over ld that the correct flags are set
+ in the ELF header, as required by certain architectures.
+*/
+
+.section .init.ramfs,"a"
+.incbin "usr/initramfs_data.cpio.bz2"
--- /dev/null
+/*
+ initramfs_data includes the compressed binary that is the
+ filesystem used for early user space.
+ Note: Older versions of "as" (prior to binutils 2.11.90.0.23
+  released on 2001-07-14) did not support .incbin.
+ If you are forced to use older binutils than that then the
+ following trick can be applied to create the resulting binary:
+
+
+ ld -m elf_i386 --format binary --oformat elf32-i386 -r \
+ -T initramfs_data.scr initramfs_data.cpio.gz -o initramfs_data.o
+ ld -m elf_i386 -r -o built-in.o initramfs_data.o
+
+ initramfs_data.scr looks like this:
+SECTIONS
+{
+ .init.ramfs : { *(.data) }
+}
+
+  The above example is for i386 - the parameters vary between architectures.
+  If needed, look up LDFLAGS_BLOB in an older version of the
+  arch/$(ARCH)/Makefile to see the flags used before .incbin was introduced.
+
+ Using .incbin has the advantage over ld that the correct flags are set
+ in the ELF header, as required by certain architectures.
+*/
+
+.section .init.ramfs,"a"
+.incbin "usr/initramfs_data.cpio.gz"
--- /dev/null
+/*
+ initramfs_data includes the compressed binary that is the
+ filesystem used for early user space.
+ Note: Older versions of "as" (prior to binutils 2.11.90.0.23
+  released on 2001-07-14) did not support .incbin.
+ If you are forced to use older binutils than that then the
+ following trick can be applied to create the resulting binary:
+
+
+ ld -m elf_i386 --format binary --oformat elf32-i386 -r \
+ -T initramfs_data.scr initramfs_data.cpio.gz -o initramfs_data.o
+ ld -m elf_i386 -r -o built-in.o initramfs_data.o
+
+ initramfs_data.scr looks like this:
+SECTIONS
+{
+ .init.ramfs : { *(.data) }
+}
+
+  The above example is for i386 - the parameters vary between architectures.
+  If needed, look up LDFLAGS_BLOB in an older version of the
+  arch/$(ARCH)/Makefile to see the flags used before .incbin was introduced.
+
+ Using .incbin has the advantage over ld that the correct flags are set
+ in the ELF header, as required by certain architectures.
+*/
+
+.section .init.ramfs,"a"
+.incbin "usr/initramfs_data.cpio.lzma"