[openstack] NAT gateway and port mismatch keeps VMs off the external network

After assigning a floating IP, the VM still could not reach the external network. Digging into it, the culprit turned out to be the quantum configuration:

The external network in quantum was configured as 192.168.19.129/25 with no gateway set, and allocation_pools of {"start": "192.168.19.130", "end": "192.168.19.254"}.

root@controller:/usr/src/nova# ip netns exec qrouter-b4721d20-9d39-4d4d-9c37-f18ecb460d02 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.19.129  0.0.0.0         UG    0      0        0 qg-29c30020-2e
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 qr-cd728374-d8
10.0.1.0        0.0.0.0         255.255.255.0   U     0      0        0 qr-f915c799-96
192.168.19.128  0.0.0.0         255.255.255.128 U     0      0        0 qg-29c30020-2e

But the router's interfaces tell a different story:

root@controller:/usr/src/nova# ip netns exec qrouter-b4721d20-9d39-4d4d-9c37-f18ecb460d02 ifconfig
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1390 (1.3 KB)  TX bytes:1390 (1.3 KB)

qg-29c30020-2e Link encap:Ethernet  HWaddr fa:16:3e:10:18:21  
          inet addr:192.168.19.130  Bcast:192.168.19.255  Mask:255.255.255.128
          inet6 addr: fe80::f816:3eff:fe10:1821/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:87 errors:0 dropped:0 overruns:0 frame:0
          TX packets:67 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:12593 (12.5 KB)  TX bytes:9608 (9.6 KB)

qr-cd728374-d8 Link encap:Ethernet  HWaddr fa:16:3e:d7:5a:2f  
          inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fed7:5a2f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:64 errors:0 dropped:0 overruns:0 frame:0
          TX packets:89 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:9710 (9.7 KB)  TX bytes:10627 (10.6 KB)

qr-f915c799-96 Link encap:Ethernet  HWaddr fa:16:3e:96:89:3a  
          inet addr:10.0.1.1  Bcast:10.0.1.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe96:893a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:594 (594.0 B)

These two values differ: packets that should have left through 192.168.19.130 were instead all forwarded toward 192.168.19.129, so the VMs could not get out. The interface address 192.168.19.130 is in fact the fixed_ips value of the quantum port that connects to the external network:

+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                             |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
| 10d13e25-cc01-4edc-aba4-5e2b3a6dff80 |      | fa:16:3e:e6:9e:30 | {"subnet_id": "169ad3b8-c961-4128-b053-2d6d36afbe1f", "ip_address": "10.0.0.4"}       |
| 29c30020-2e91-4ffa-91e3-a8acef553641 |      | fa:16:3e:10:18:21 | {"subnet_id": "3f53264f-683b-45a8-a7ab-289afd2288b5", "ip_address": "192.168.19.130"} |
| 7e659611-43b3-4f52-b392-28ddd5051bca |      | fa:16:3e:9e:84:c8 | {"subnet_id": "3f53264f-683b-45a8-a7ab-289afd2288b5", "ip_address": "192.168.19.131"} |
| 7f000789-2e36-4aef-8d08-acb700ddde9f |      | fa:16:3e:07:92:81 | {"subnet_id": "169ad3b8-c961-4128-b053-2d6d36afbe1f", "ip_address": "10.0.0.2"}       |
| 91da98b9-e9df-4a2c-b97d-02299d33fe89 |      | fa:16:3e:f7:42:d9 | {"subnet_id": "3f53264f-683b-45a8-a7ab-289afd2288b5", "ip_address": "192.168.19.132"} |
| a132b58c-238a-4b9f-92ce-c47521cda668 |      | fa:16:3e:31:81:8e | {"subnet_id": "169ad3b8-c961-4128-b053-2d6d36afbe1f", "ip_address": "10.0.0.3"}       |
| b1a9afa6-6850-4044-a2b6-cca6c12fc6fa |      | fa:16:3e:89:2e:fb | {"subnet_id": "0636c5f2-70ab-4fb9-a7d5-986c92eaf1aa", "ip_address": "10.0.1.2"}       |
| b629349e-ad6e-427a-8aae-291f55ef4b32 |      | fa:16:3e:31:a2:cf | {"subnet_id": "169ad3b8-c961-4128-b053-2d6d36afbe1f", "ip_address": "10.0.0.5"}       |
| cd728374-d89e-4f64-b437-b3e1580b49e9 |      | fa:16:3e:d7:5a:2f | {"subnet_id": "169ad3b8-c961-4128-b053-2d6d36afbe1f", "ip_address": "10.0.0.1"}       |
| f915c799-96aa-40bf-a3aa-06d43bc1c284 |      | fa:16:3e:96:89:3a | {"subnet_id": "0636c5f2-70ab-4fb9-a7d5-986c92eaf1aa", "ip_address": "10.0.1.1"}       |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+

Trying to set this network's gateway to .130 fails with:

# quantum subnet-update userA-public --gateway_ip 192.168.19.130
Gateway ip 192.168.19.130 conflicts with allocation pool 192.168.19.130-192.168.19.254
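The rejection makes sense: quantum refuses any gateway_ip that falls inside an allocation pool. A minimal sketch of that check with Python's ipaddress module (the function name is mine, not quantum's):

```python
import ipaddress

def gateway_conflicts(gateway, pool_start, pool_end):
    """True if the gateway address falls inside the allocation pool,
    which is exactly what quantum rejects on subnet-update."""
    gw = ipaddress.ip_address(gateway)
    return (ipaddress.ip_address(pool_start) <= gw
            <= ipaddress.ip_address(pool_end))

# 192.168.19.130 is the first address of the pool, hence the conflict;
# 192.168.19.129 sits just below the pool and would be accepted.
print(gateway_conflicts("192.168.19.130", "192.168.19.130", "192.168.19.254"))  # True
print(gateway_conflicts("192.168.19.129", "192.168.19.130", "192.168.19.254"))  # False
```

So with the pool starting at .130, the gateway can never be set to the address the router's port actually got.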

In the quantum code (agent/l3_agent.py) this shows up as:

        # ex_gw_ip: the address assigned to the router's external port
        ex_gw_ip = ex_gw_port['fixed_ips'][0]['ip_address']
        if not ip_lib.device_exists(interface_name,
                                    root_helper=self.root_helper,
                                    namespace=ri.ns_name()):

......

        # gw_ip: the subnet's gateway_ip, which becomes the default route
        gw_ip = ex_gw_port['subnet']['gateway_ip']
        if ex_gw_port['subnet']['gateway_ip']:
            cmd = ['route', 'add', 'default', 'gw', gw_ip]

It is not clear why there are two separate values here, ex_gw_ip and gw_ip; when they disagree, this is the result.

The workaround is simple:

# ip netns exec qrouter-b4721d20-9d39-4d4d-9c37-f18ecb460d02 route del default gw 192.168.19.129
# ip netns exec qrouter-b4721d20-9d39-4d4d-9c37-f18ecb460d02 route add default gw 192.168.19.130

------------------------------------------------------------

The workaround above is a pain: it has to be reapplied every time the l3-agent restarts. I looked at the problem again today, and the real cause is that we did not understand neutron networking well enough. My earlier setup first dumped packets onto the qg-f103a9f2-d6 interface and then let the routing table outside the namespace make the routing decision:

# ip netns exec qrouter-09be29ea-25f6-4a53-b3ab-8d0e13dc7198 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.19.130  0.0.0.0         UG    0      0        0 qg-f103a9f2-d6
100.0.0.0       0.0.0.0         255.255.255.0   U     0      0        0 qr-d8fcb028-ea
192.168.19.0    0.0.0.0         255.255.255.0   U     0      0        0 qg-f103a9f2-d6
200.0.0.0       0.0.0.0         255.255.255.0   U     0      0        0 qr-b422c431-d8

# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.19.254  0.0.0.0         UG    100    0        0 br-ex
20.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth1
30.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth2
192.168.19.0    0.0.0.0         255.255.255.0   U     0      0        0 br-ex

The packet path is qr-d8fcb028-ea -> (namespace routing) -> qg-f103a9f2-d6 -> (host routing) -> br-ex -> eth0 -> router

In fact the public net is itself an external network, so it should simply match the physical machine's network: 192.168.19.0/24, with the physical gateway 192.168.19.254 as its gateway. Configured this way, the l3-agent creates the correct default route in the namespace every time:

# ip netns exec qrouter-09be29ea-25f6-4a53-b3ab-8d0e13dc7198 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.19.254  0.0.0.0         UG    0      0        0 qg-f103a9f2-d6
100.0.0.0       0.0.0.0         255.255.255.0   U     0      0        0 qr-d8fcb028-ea
192.168.19.0    0.0.0.0         255.255.255.0   U     0      0        0 qg-f103a9f2-d6
200.0.0.0       0.0.0.0         255.255.255.0   U     0      0        0 qr-b422c431-d8

Now once a packet reaches the namespace, the routing decision sends it straight out qg-f103a9f2-d6, through br-ex, and out eth0. The packet path looks the same as before, but traffic from the namespace to the physical gateway now stays within a single network.
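To see why the single-network layout works, here is a toy longest-prefix match over the namespace routing table above (a sketch of the kernel's routing decision, not its real implementation):

```python
import ipaddress

# The namespace routing table from above: (network, gateway, interface).
# A gateway of None means the destination is directly reachable on that link.
ROUTES = [
    ("0.0.0.0/0",       "192.168.19.254", "qg-f103a9f2-d6"),
    ("100.0.0.0/24",    None,             "qr-d8fcb028-ea"),
    ("192.168.19.0/24", None,             "qg-f103a9f2-d6"),
    ("200.0.0.0/24",    None,             "qr-b422c431-d8"),
]

def lookup(dst):
    """Pick the most specific matching route for a destination address."""
    dst = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(net), gw, dev)
               for net, gw, dev in ROUTES
               if dst in ipaddress.ip_network(net)]
    net, gw, dev = max(matches, key=lambda m: m[0].prefixlen)
    return gw, dev

print(lookup("8.8.8.8"))         # ('192.168.19.254', 'qg-f103a9f2-d6')
print(lookup("192.168.19.140"))  # (None, 'qg-f103a9f2-d6'): same L2 segment
```

Traffic to the physical gateway 192.168.19.254 itself matches the connected 192.168.19.0/24 route, so the namespace and the gateway talk directly on one network, which is exactly what was missing before.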

One cause of a VM losing its network

There are many reasons a VM in OpenStack can fail to reach the network. I hit one today that I had actually seen before; only because it was no longer fresh in my mind did I spend half a day debugging it. Tragic...

Symptom: the VM's network is extremely slow. For example, apt-get update can reach the mirror but barely downloads anything.

Debugging: since the VM could get online at all, I first suspected quantum's L3 layer, but iptables and the routes inside the namespace both looked fine:
root@controller:/usr/src/nova# ip netns
qdhcp-eb2fc4cd-d656-4e64-adc2-001d3cfbcebd
qrouter-b4721d20-9d39-4d4d-9c37-f18ecb460d02
qdhcp-77a8d872-103a-4d8c-9f47-bc6ec34a2ff4
qdhcp-faacf658-dae9-4230-8fbc-7cde47c425b1

Next, capture packets:
ip netns exec qrouter-b4721d20-9d39-4d4d-9c37-f18ecb460d02 tcpdump -i qg-29c30020-2e   (external interface)
15:20:00.465882 IP 192.168.19.131 > likho.canonical.com: ICMP 192.168.19.131 unreachable - need to frag (mtu 1454), length 556
15:20:02.046704 IP likho.canonical.com.http > 192.168.19.131.56147: Flags [.], seq 1:1449, ack 256, win 61, options [nop,nop,TS val 3517237056 ecr 105897], length 1448
15:20:02.046825 IP 192.168.19.131 > likho.canonical.com: ICMP 192.168.19.131 unreachable - need to frag (mtu 1454), length 556
ip netns exec qrouter-b4721d20-9d39-4d4d-9c37-f18ecb460d02 tcpdump -i qr-cd728374-d8   (internal interface)
15:20:02.046763 IP likho.canonical.com.http > 10.0.0.4.56147: Flags [.], seq 1:1449, ack 256, win 61, options [nop,nop,TS val 3517237056 ecr 105897], length 1448
15:20:02.046800 IP 10.0.0.4 > likho.canonical.com: ICMP 10.0.0.4 unreachable - need to frag (mtu 1454), length 556
15:20:02.541472 IP sudice.canonical.com.http > 10.0.0.4.39679: Flags [.], seq 1:1449, ack 258, win 61, options [nop,nop,TS val 1537132828 ecr 105323], length 1448
15:20:02.541502 IP 10.0.0.4 > sudice.canonical.com: ICMP 10.0.0.4 unreachable - need to frag (mtu 1454), length 556
The captures are full of unreachable errors. At first I thought internal traffic was leaving without SNAT, but then I realized the key part of the error message is need to frag.

The cause is that the VM's MTU is too large for the tunnel path, so it needs a smaller value. The fix is then simple; run inside the VM:
ifconfig eth0 mtu 1400
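The arithmetic behind the failure: the captured segments carry 1448 bytes of TCP payload, which together with the standard 20-byte IP header and a 32-byte TCP header (20 bytes plus the 12-byte timestamp option visible in the capture) fills the VM's default 1500-byte MTU exactly, while the ICMP errors report a path MTU of only 1454. A quick sanity check:

```python
def packet_size(payload, ip_header=20, tcp_header=20, tcp_options=12):
    """Total IP packet size for a TCP segment: payload plus the
    standard IPv4/TCP headers and the timestamp option from the capture."""
    return payload + ip_header + tcp_header + tcp_options

PATH_MTU = 1454   # reported by the ICMP "need to frag" errors above

print(packet_size(1448))             # 1500: fills the default VM MTU exactly
print(packet_size(1448) > PATH_MTU)  # True: hence the "need to frag" errors
# After "ifconfig eth0 mtu 1400", TCP advertises an MSS of 1400 - 40 = 1360,
# so every segment fits under the 1454-byte path MTU with room to spare.
print(packet_size(1360) <= PATH_MTU) # True
```

1400 is just a comfortable value below the path MTU; anything at or under 1454 would also have worked here.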

DONE

Floodlight REST module

1 Download the jackson and restlet libraries.

2 Create an AntiDDoSResource class extending ServerResource:

public class AntiDDoSResource extends ServerResource {

	@Get("json")
    public Object handleRequest() {
		ISecurityAppProviderService service = 
                (ISecurityAppProviderService)getContext().getAttributes().
                get(ISecurityAppProviderService.class.getCanonicalName());

        String op = (String) getRequestAttributes().get("op");
        String obj = (String) getRequestAttributes().get("obj");

        // REST API check status
        if (op.equalsIgnoreCase("status")) {
            if (service.isEnabled())
                return "{\"result\" : \"ADS enabled\"}";
            else
                return "{\"result\" : \"ADS disabled\"}";
        }

        // REST API enable firewall
        if (op.equalsIgnoreCase("enable")) {
        	service.run();
            return "{\"status\" : \"success\", \"details\" : \"ADS running\"}";
        } 
        
        // REST API disable firewall
        if (op.equalsIgnoreCase("disable")) {
            service.terminate();
            return "{\"status\" : \"success\", \"details\" : \"ADS stopped\"}";
        } 

        // no known options found
        return "{\"status\" : \"failure\", \"details\" : \"invalid operation: "+op+"/"+obj+"\"}";
    }
}

3 Create an ADSWebRoutable class implementing RestletRoutable:

public class ADSWebRoutable implements RestletRoutable {
    @Override
    public Router getRestlet(Context context) {
        Router router = new Router(context);
        router.attach("/{op}/json", AntiDDoSResource.class);
        router.attach("/{op}/{obj}/json", AntiDDoSResource.class);
        return router;
    }

    /**
     * Set the base path for the ADS application
     */
    @Override
    public String basePath() {
        return "/app/ads";
    }
}

4 Bind the ADSWebRoutable to the REST server (typically by calling restApi.addRestletRoutable(new ADSWebRoutable()) in the module's startUp method).

floodlight ignores the subnet gateway due to PORT_DOWN and LINK_DOWN

Liu Wenmao
May 7
to openstack
hi

I use quantum grizzly with namespaces and floodlight, but VMs cannot ping their gateway. It seems that floodlight ignores devices whose status is PORT_DOWN or LINK_DOWN, and somehow the subnet gateway really is PORT_DOWN and LINK_DOWN. Is this normal? Or how can I change its status to normal?

root@controller:~# ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:0000e2ed9e9b6942
n_tables:255, n_buffers:256
features: capabilities:0xc7, actions:0xfff
 1(qr-c5496165-c7): addr:5e:67:22:5b:d5:0e
     config:     PORT_DOWN
     state:      LINK_DOWN
 2(qr-8af2e01f-bb): addr:e4:00:00:00:00:00<--------------------this is the gateway.....
     config:     PORT_DOWN
     state:      LINK_DOWN
 3(qr-48c69382-4f): addr:22:64:6f:3a:9f:cd
     config:     PORT_DOWN
     state:      LINK_DOWN
 4(patch-tun): addr:8e:90:4c:aa:d2:06
     config:     0
     state:      0
 5(tap5b5891ac-94): addr:6e:52:f7:c1:ef:f4
     config:     PORT_DOWN
     state:      LINK_DOWN
 6(tap09a002af-66): addr:c6:cb:01:60:3f:8a
     config:     PORT_DOWN
     state:      LINK_DOWN
 7(tap160480aa-84): addr:96:43:cc:05:71:d5
     config:     PORT_DOWN
     state:      LINK_DOWN
 8(tapf6040ba0-b5): addr:e4:00:00:00:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
 9(tap0ded1c0f-df): addr:12:c8:b3:5c:fb:6a
     config:     PORT_DOWN
     state:      LINK_DOWN
 10(tapaebb6140-31): addr:e4:00:00:00:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
 11(tapddc3ce63-2b): addr:e4:00:00:00:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
 12(qr-9b9a3229-19): addr:e4:00:00:00:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
 LOCAL(br-int): addr:e2:ed:9e:9b:69:42
     config:     PORT_DOWN
     state:      LINK_DOWN
OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0

The relevant floodlight code:
if (entity.hasSwitchPort() &&
        !topology.isAttachmentPointPort(entity.getSwitchDPID(),
                                        entity.getSwitchPort().shortValue())) {
    if (logger.isDebugEnabled()) {
        logger.debug("Not learning new device on internal"
                     + " link: {}", entity);
    }

public boolean portEnabled(OFPhysicalPort port) {
    if (port == null)
        return false;
    if ((port.getConfig() & OFPortConfig.OFPPC_PORT_DOWN.getValue()) > 0)
        return false;
    if ((port.getState() & OFPortState.OFPPS_LINK_DOWN.getValue()) > 0)
        return false;
    return true;
}

git over http and the first repo push

Apache virtual host setup

cat /etc/apache2/sites-enabled/git

<VirtualHost 192.168.1.1:80>
ServerName git.expr.nsfocus
DocumentRoot /var/www/git/

SetEnv GIT_PROJECT_ROOT /var/www/git/
SetEnv GIT_HTTP_EXPORT_ALL
ScriptAlias /repos/ /usr/lib/git-core/git-http-backend/
Options Indexes FollowSymLinks MultiViews
<Location /repos/>
AuthType Basic
AuthName "Restricted Files"
AuthUserFile /etc/apache2/password.dav
Require valid-user
</Location>
</VirtualHost>

htpasswd -c  /etc/apache2/password.dav youraccount

Server-side repo init

mkdir -p /var/www/git/app/
cd /var/www/git/app/
git init --bare   # note: two dashes
git update-server-info
chown www-data.www-data -R /var/www/git/app/

Client-side repo push

cd d:\app
edit .gitignore
git init
git add *
git commit -a -m "first commit"
git remote add origin http://git.expr.nsfocus/repos/app/
git push -u origin master


How Floodlight loads and runs modules

I spent the last couple of days looking at Floodlight's module mechanism and got a rough picture of how its plugins are loaded and run. I wrote a very simple module and would like to share it here.

Floodlight's entry point is net.floodlightcontroller.core.Main. In this class's main function, a FloodlightModuleLoader loads all the modules; the run method of the net.floodlightcontroller.restserver.RestApiServer module is then called to start the REST server, and finally the run method of the net.floodlightcontroller.core.internal.Controller module is called to start the network controller.
 
   public static void main(String[] args) throws FloodlightModuleException {
        ...
        FloodlightModuleLoader fml = new FloodlightModuleLoader();
        IFloodlightModuleContext moduleContext = fml.loadModulesFromConfig(settings.getModuleFile());
        IRestApiService restApi = moduleContext.getServiceImpl(IRestApiService.class);
        restApi.run();
        IFloodlightProviderService controller =
                moduleContext.getServiceImpl(IFloodlightProviderService.class);
        controller.run();
    }
During module loading, the findAllModules method of FloodlightModuleLoader finds every module on the $CLASSPATH that implements the net.floodlightcontroller.core.module.IFloodlightModule interface; initModules then initializes these modules, and startupModules starts them.
FloodlightModuleLoader.java:303
    initModules(moduleSet);
    startupModules(moduleSet);
Take MemoryStorageSource as an example: it extends NoSqlStorageSource, which in turn extends AbstractStorageSource, and AbstractStorageSource implements both the IFloodlightModule and IStorageSourceService interfaces.
IFloodlightModule declares an init method and a startUp method, so MemoryStorageSource implements both; they are called at initialization and at startup respectively.



A minimal module example
Below is the process of creating a new module, using an IDSController as the example; for now it only logs a message on a timer:
1 Define the service com.nsfocus.ids.IIntrusionDetectionService:
 
public interface IIntrusionDetectionService  extends IFloodlightService {
}
2 Create a class com.nsfocus.ids.IDSController implementing the IFloodlightModule interface:
Implement the interface's getModuleServices, getServiceImpls, and getModuleDependencies methods; these three are used mainly during module initialization.
When the system runs ServiceLoader.load(IFloodlightModule.class, cl), IDSController is added to the list of modules implementing IFloodlightModule.

3 Add the new module to the config file:
In target\bin\floodlightdefault.properties add:

net.floodlightcontroller.perfmon.PktInProcessingTime,\
net.floodlightcontroller.ui.web.StaticWebRoutable,\
com.nsfocus.ids.IDSController
net.floodlightcontroller.restserver.RestApiServer.port = 8080


This way the new module is added to the configMods set at load time.
In addition, add a line to bin\META-INF\services\net.floodlightcontroller.core.module.ISecurityControllerModule:
    com.nsfocus.ids.IDSController
When execution reaches the ServiceLoader.load call in FloodlightModuleLoader.findAllModules, every module listed in this second file is loaded; a module listed only in the first file but not in the second causes a SecurityControllerModuleException to be thrown. So be sure to add the module to both files.

4 Add the corresponding functionality to IDSController
4.1 Initialization code

@Override
public void init(FloodlightModuleContext context) throws FloodlightModuleException {
    threadPool = context.getServiceImpl(IThreadPoolService.class);
    Map<String, String> configOptions = context.getConfigParams(this); 
    try {
        String detectTimeout = configOptions.get("detecttimeout");
        if (detectTimeout != null) {
            DETECT_TASK_INTERVAL = Short.parseShort(detectTimeout);
        }
     } catch (NumberFormatException e) {
        log.warn("Error parsing detecting timeout, " +
            "using default of {} seconds",
            DETECT_TASK_INTERVAL);
     }
     log.debug("NSFOCUS IDS controller initialized");
}

4.2 Startup code
@Override
public void startUp(FloodlightModuleContext context) {
    log.debug("NSFOCUS IDS controller started");
    ScheduledExecutorService ses = threadPool.getScheduledExecutor();
    detectTask = new SingletonTask(ses, new Runnable() {
        @Override
        public void run() {
            try {
                detect();
                detectTask.reschedule(DETECT_TASK_INTERVAL, TimeUnit.SECONDS);
            } catch (Exception e) {
                log.error("Exception in IDS detector", e);
            } finally {
            }
        }
     });
    detectTask.reschedule(DETECT_TASK_INTERVAL, TimeUnit.SECONDS);
}
public void detect(){
   log.debug("Detecting...");
}

With this in place, floodlight automatically calls IDSController's init method during initialization and its startUp method at startup. The code above uses a timer to trigger detection periodically.

Part of the startup log:

16:02:51.242 [main] DEBUG n.f.core.internal.Controller - OFListeners for PACKET_IN: linkdiscovery,topology,devicemanager,firewall,forwarding,
16:02:51.302 [main] INFO  n.f.core.internal.Controller - Listening for switch connections on 0.0.0.0/0.0.0.0:6633
16:02:53.611 [debugserver-main] INFO  n.f.jython.JythonServer - Starting DebugServer on port 6655
16:02:55.095 [pool-3-thread-15] DEBUG com.nsfocus.ids.IDSController - Detecting...
16:02:59.096 [pool-3-thread-11] DEBUG com.nsfocus.ids.IDSController - Detecting...

P.S. The other member variables are declared as:

    protected static Logger log = LoggerFactory.getLogger(IDSController.class);
    protected SingletonTask detectTask;
    protected int DETECT_TASK_INTERVAL = 2;
    protected IThreadPoolService threadPool;

For every lock, a pick

Background: an auntie from the neighborhood committee told me that Party member study is now scored with points: you log on to a certain "Pioneer" website until you accumulate 90 minutes, and time does not count unless the mouse moves. Inhumane!

With my programmer background I took a close look at the timing logic, which is implemented in js. So last month I wrote one line of code and beat it with chrome plus a javascript bookmarklet. See the weibo link I posted earlier.

This month I found the code had been rewritten, adding a limit on the number of pages read. Which programmer did this? Did they actually see my weibo? Even worse, they added a browser restriction on top, repeatedly abusing ordinary users for their own convenience: a specialty of certain Chinese programmers.


But the rewritten code is still js, so it is still beatable! See the screenshot.


Here is the code:

javascript:function refreshpage(){ count=10;addtime(); setTimeout('refreshpage()',10000); } refreshpage();

Ten pages every ten seconds; am I a model Party member or what?

As I said before, why bother with this sort of thing? Programmers should not make life hard for fellow programmers... If you still object, we can have another round next month. And one more thing: enforcing restrictions purely in javascript is plain hooliganism.

Differences between 802.11p and the other 802.11 protocols

The first difference is that a STA may communicate outside the scope of a BSS. If dot11OCBEnabled is set, a STA can communicate outside the context of a BSS (which is what OCB stands for), greatly reducing the time needed to establish a connection.

Europe has a corresponding standard: portal.etsi.org/docbox/STF/STF373_ITS5_security_5GHz/STFworkarea/…/ES202663_v0.1.1_Milestone%20B.doc

Besides the change in communication range, there is also a change in operating temperature: one class covers -40 to 85 degrees Celsius, specifically to suit outdoor traffic environments.

Inserting side-by-side figures in a two-column latex paper

As usual, Wikipedia is the most useful resource, here.

Specifically:

\usepackage{subfig}

\begin{figure}
\centering
\subfloat[A gull]{\label{fig:gull}\includegraphics[width=0.3\textwidth]{gull}}
\subfloat[A tiger]{\label{fig:tiger}\includegraphics[width=0.3\textwidth]{tiger}}
\subfloat[A mouse]{\label{fig:mouse}\includegraphics[width=0.3\textwidth]{mouse}}
\caption{Pictures of animals}
\label{fig:animals}
\end{figure}